Jan 27 18:05:43 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 27 18:05:43 crc restorecon[4687]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 18:05:43 crc restorecon[4687]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 18:05:43 crc restorecon[4687]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc 
restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc 
restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 
18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 18:05:43 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 
crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 
18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:05:44 crc 
restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc 
restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:05:44 crc restorecon[4687]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:05:44 crc 
restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:05:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:05:44 crc restorecon[4687]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:05:44 crc restorecon[4687]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 27 18:05:45 crc kubenswrapper[4907]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 18:05:45 crc kubenswrapper[4907]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 27 18:05:45 crc kubenswrapper[4907]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 18:05:45 crc kubenswrapper[4907]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
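The deprecation warnings above state that `--container-runtime-endpoint`, `--volume-plugin-dir`, and `--register-with-taints` should be set via the file passed to the kubelet's `--config` flag, and that `--minimum-container-ttl-duration` is superseded by `--eviction-hard`/`--eviction-soft`. A minimal sketch of the corresponding `KubeletConfiguration` follows; the endpoint path, plugin directory, taint, and eviction thresholds are illustrative values, not taken from this log:

```yaml
# Hypothetical kubelet config file (passed via --config) that replaces the
# deprecated command-line flags reported in the journal entries above.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint (example CRI-O socket path)
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
# replaces --volume-plugin-dir (example path)
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
# replaces --register-with-taints (example taint)
registerWithTaints:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
# replaces --minimum-container-ttl-duration per the warning's guidance
# (example thresholds)
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
```

The `--system-reserved` warning later in the log maps the same way, to the `systemReserved` field of this file.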
Jan 27 18:05:45 crc kubenswrapper[4907]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 27 18:05:45 crc kubenswrapper[4907]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.443506 4907 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452676 4907 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452742 4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452750 4907 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452758 4907 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452764 4907 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452770 4907 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452778 4907 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452785 4907 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452793 4907 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452799 4907 
feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452806 4907 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452811 4907 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452816 4907 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452822 4907 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452828 4907 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452835 4907 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452842 4907 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452850 4907 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452857 4907 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452864 4907 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452879 4907 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452886 4907 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452892 4907 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452897 4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452903 4907 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452908 4907 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452913 4907 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452919 4907 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452924 4907 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452929 4907 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452935 4907 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452940 4907 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452945 4907 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452951 4907 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452957 4907 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452962 4907 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452967 4907 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452973 4907 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452978 4907 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452983 4907 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452989 4907 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452994 4907 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.452999 4907 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453005 4907 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453010 4907 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453015 4907 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453021 4907 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453026 4907 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453032 4907 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453039 4907 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453044 4907 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453052 4907 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453061 4907 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453069 4907 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453077 4907 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453083 4907 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453090 4907 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453097 4907 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453106 4907 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453111 4907 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453117 4907 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453122 4907 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453128 4907 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453133 4907 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453138 4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453143 4907 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453148 4907 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453154 4907 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453159 4907 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453164 4907 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.453170 4907 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454096 4907 flags.go:64] FLAG: --address="0.0.0.0"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454124 4907 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454138 4907 flags.go:64] FLAG: --anonymous-auth="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454147 4907 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454155 4907 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454162 4907 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454171 4907 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454179 4907 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454185 4907 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454190 4907 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454196 4907 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454202 4907 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454208 4907 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454213 4907 flags.go:64] FLAG: --cgroup-root=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454221 4907 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454227 4907 flags.go:64] FLAG: --client-ca-file=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454233 4907 flags.go:64] FLAG: --cloud-config=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454238 4907 flags.go:64] FLAG: --cloud-provider=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454244 4907 flags.go:64] FLAG: --cluster-dns="[]"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454250 4907 flags.go:64] FLAG: --cluster-domain=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454256 4907 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454262 4907 flags.go:64] FLAG: --config-dir=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454267 4907 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454273 4907 flags.go:64] FLAG: --container-log-max-files="5"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454281 4907 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454287 4907 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454293 4907 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454299 4907 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454305 4907 flags.go:64] FLAG: --contention-profiling="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454310 4907 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454315 4907 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454320 4907 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454324 4907 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454331 4907 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454336 4907 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454341 4907 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454345 4907 flags.go:64] FLAG: --enable-load-reader="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454351 4907 flags.go:64] FLAG: --enable-server="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454355 4907 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454362 4907 flags.go:64] FLAG: --event-burst="100"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454367 4907 flags.go:64] FLAG: --event-qps="50"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454372 4907 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454377 4907 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454381 4907 flags.go:64] FLAG: --eviction-hard=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454388 4907 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454394 4907 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454399 4907 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454404 4907 flags.go:64] FLAG: --eviction-soft=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454408 4907 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454413 4907 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454417 4907 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454422 4907 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454429 4907 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454435 4907 flags.go:64] FLAG: --fail-swap-on="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454440 4907 flags.go:64] FLAG: --feature-gates=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454446 4907 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454451 4907 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454455 4907 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454460 4907 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454465 4907 flags.go:64] FLAG: --healthz-port="10248"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454469 4907 flags.go:64] FLAG: --help="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454474 4907 flags.go:64] FLAG: --hostname-override=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454478 4907 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454483 4907 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454487 4907 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454492 4907 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454496 4907 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454501 4907 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454505 4907 flags.go:64] FLAG: --image-service-endpoint=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454509 4907 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454514 4907 flags.go:64] FLAG: --kube-api-burst="100"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454518 4907 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454523 4907 flags.go:64] FLAG: --kube-api-qps="50"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454528 4907 flags.go:64] FLAG: --kube-reserved=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454532 4907 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454536 4907 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454541 4907 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454546 4907 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454550 4907 flags.go:64] FLAG: --lock-file=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454571 4907 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454576 4907 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454581 4907 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454591 4907 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454596 4907 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454603 4907 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454608 4907 flags.go:64] FLAG: --logging-format="text"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454613 4907 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454620 4907 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454625 4907 flags.go:64] FLAG: --manifest-url=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454631 4907 flags.go:64] FLAG: --manifest-url-header=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454638 4907 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454643 4907 flags.go:64] FLAG: --max-open-files="1000000"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454650 4907 flags.go:64] FLAG: --max-pods="110"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454655 4907 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454661 4907 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454667 4907 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454672 4907 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454679 4907 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454684 4907 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454690 4907 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454705 4907 flags.go:64] FLAG: --node-status-max-images="50"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454711 4907 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454716 4907 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454720 4907 flags.go:64] FLAG: --pod-cidr=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454725 4907 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454734 4907 flags.go:64] FLAG: --pod-manifest-path=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454738 4907 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454743 4907 flags.go:64] FLAG: --pods-per-core="0"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454747 4907 flags.go:64] FLAG: --port="10250"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454754 4907 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454759 4907 flags.go:64] FLAG: --provider-id=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454763 4907 flags.go:64] FLAG: --qos-reserved=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454767 4907 flags.go:64] FLAG: --read-only-port="10255"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454772 4907 flags.go:64] FLAG: --register-node="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454776 4907 flags.go:64] FLAG: --register-schedulable="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454780 4907 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454788 4907 flags.go:64] FLAG: --registry-burst="10"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454792 4907 flags.go:64] FLAG: --registry-qps="5"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454796 4907 flags.go:64] FLAG: --reserved-cpus=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454800 4907 flags.go:64] FLAG: --reserved-memory=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454806 4907 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454811 4907 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454815 4907 flags.go:64] FLAG: --rotate-certificates="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454820 4907 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454823 4907 flags.go:64] FLAG: --runonce="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454828 4907 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454832 4907 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454836 4907 flags.go:64] FLAG: --seccomp-default="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454841 4907 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454845 4907 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454850 4907 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454855 4907 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454860 4907 flags.go:64] FLAG: --storage-driver-password="root"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454865 4907 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454869 4907 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454874 4907 flags.go:64] FLAG: --storage-driver-user="root"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454879 4907 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454883 4907 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454887 4907 flags.go:64] FLAG: --system-cgroups=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454891 4907 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454899 4907 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454916 4907 flags.go:64] FLAG: --tls-cert-file=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454920 4907 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454926 4907 flags.go:64] FLAG: --tls-min-version=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454930 4907 flags.go:64] FLAG: --tls-private-key-file=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454936 4907 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454940 4907 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454945 4907 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454949 4907 flags.go:64] FLAG: --v="2"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454956 4907 flags.go:64] FLAG: --version="false"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454963 4907 flags.go:64] FLAG: --vmodule=""
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454969 4907 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.454973 4907 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455099 4907 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455106 4907 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455110 4907 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455115 4907 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455119 4907 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455123 4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455128 4907 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455132 4907 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455135 4907 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455139 4907 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455142 4907 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455147 4907 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455153 4907 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455157 4907 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455160 4907 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455164 4907 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455169 4907 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455174 4907 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455178 4907 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455181 4907 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455187 4907 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455191 4907 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455195 4907 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455198 4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455202 4907 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455206 4907 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455210 4907 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455213 4907 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455218 4907 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455221 4907 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455225 4907 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455229 4907 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455232 4907 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455237 4907 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455241 4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455245 4907 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455249 4907 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455253 4907 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455258 4907 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455262 4907 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455266 4907 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455270 4907 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455273 4907 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455278 4907 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455288 4907 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455292 4907 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455296 4907 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455300 4907 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455304 4907 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455307 4907 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455311 4907 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455315 4907 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455320 4907 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455324 4907 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455327 4907 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455332 4907 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455362 4907 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455367 4907 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455372 4907 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455377 4907 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455382 4907 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455386 4907 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455392 4907 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455396 4907 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455402 4907 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455407 4907 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455411 4907 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455415 4907 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455419 4907 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455423 4907 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.455427 4907 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.455443 4907 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.466387 4907 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.466721 4907 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466814 4907 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466828 4907 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466833 4907 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466837 4907 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466843 4907 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466847 4907 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466851 4907 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466854 4907 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466858 4907 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466862 4907 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466865
4907 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466868 4907 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466872 4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466875 4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466879 4907 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466883 4907 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466888 4907 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466896 4907 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466900 4907 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466904 4907 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466908 4907 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466912 4907 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466915 4907 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466919 4907 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466922 4907 feature_gate.go:330] unrecognized 
feature gate: NewOLM Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466926 4907 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466930 4907 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466934 4907 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466938 4907 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466942 4907 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466946 4907 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466951 4907 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466956 4907 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466961 4907 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466967 4907 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466972 4907 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466976 4907 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466980 4907 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466983 4907 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466987 4907 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466992 4907 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.466997 4907 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467001 4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467005 4907 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467009 4907 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467012 4907 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467016 4907 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467020 4907 feature_gate.go:330] unrecognized feature gate: Example Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467024 4907 
feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467027 4907 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467031 4907 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467035 4907 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467038 4907 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467042 4907 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467045 4907 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467049 4907 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467053 4907 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467056 4907 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467060 4907 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467064 4907 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467067 4907 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467071 4907 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467075 4907 
feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467079 4907 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467083 4907 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467086 4907 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467090 4907 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467093 4907 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467097 4907 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467100 4907 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467104 4907 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.467113 4907 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467237 4907 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467246 4907 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 
18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467251 4907 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467255 4907 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467260 4907 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467264 4907 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467268 4907 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467273 4907 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467278 4907 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467283 4907 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467289 4907 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467294 4907 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467298 4907 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467302 4907 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467307 4907 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467311 4907 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467314 4907 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467319 4907 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467322 4907 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467326 4907 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467330 4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467335 4907 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467340 4907 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467344 4907 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467383 4907 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467391 4907 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467395 4907 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467399 4907 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467404 4907 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467407 4907 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467412 4907 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467416 4907 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467421 4907 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467425 4907 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467430 4907 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467434 4907 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467438 4907 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467443 4907 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467446 4907 feature_gate.go:330] unrecognized feature gate: Example Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467450 4907 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467454 4907 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467458 4907 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467462 4907 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467465 4907 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467469 4907 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467473 4907 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467477 4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 18:05:45 crc 
kubenswrapper[4907]: W0127 18:05:45.467481 4907 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467484 4907 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467488 4907 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467491 4907 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467495 4907 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467499 4907 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467502 4907 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467506 4907 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467509 4907 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467516 4907 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467520 4907 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467524 4907 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467528 4907 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467532 4907 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 
18:05:45.467535 4907 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467539 4907 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467543 4907 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467547 4907 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467551 4907 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467568 4907 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467572 4907 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467576 4907 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467580 4907 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.467585 4907 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.467592 4907 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.468851 4907 server.go:940] "Client rotation is on, will bootstrap in background" 
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.480435 4907 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.480661 4907 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.482837 4907 server.go:997] "Starting client certificate rotation" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.482902 4907 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.484003 4907 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-08 08:24:48.887067554 +0000 UTC Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.484161 4907 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.513802 4907 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.519698 4907 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.520993 4907 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.540763 4907 log.go:25] "Validated CRI v1 runtime API" Jan 27 
18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.600781 4907 log.go:25] "Validated CRI v1 image API" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.603198 4907 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.614406 4907 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-18-01-26-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.614435 4907 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.630843 4907 manager.go:217] Machine: {Timestamp:2026-01-27 18:05:45.625573775 +0000 UTC m=+0.754856407 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:0be71cc9-e3e6-47b6-b7c1-354451a0e2c5 BootID:a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 
Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:f4:37:48 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:f4:37:48 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:2d:a0:dd Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:1e:ee:22 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:88:0a:33 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:83:8f:1a Speed:-1 Mtu:1496} {Name:eth10 MacAddress:d2:51:71:61:4a:99 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:7e:81:e8:8e:52:02 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 
BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 
Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.631087 4907 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.631265 4907 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.633533 4907 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.633727 4907 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.633771 4907 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.633966 4907 topology_manager.go:138] "Creating topology manager with none policy" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.633976 4907 container_manager_linux.go:303] "Creating device plugin manager" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.634517 4907 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.634535 4907 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.634734 4907 state_mem.go:36] "Initialized new in-memory state store" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.634847 4907 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.641205 4907 kubelet.go:418] "Attempting to sync node with API server" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.641230 4907 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.641247 4907 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.641263 4907 kubelet.go:324] "Adding apiserver pod source" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.641274 4907 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.655422 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.655643 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.655731 4907 
kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.656362 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.656465 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.658899 4907 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.663269 4907 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665025 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665053 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665062 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665070 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665084 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665094 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665106 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665120 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665130 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665142 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665176 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.665184 4907 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.672382 4907 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.673274 4907 server.go:1280] "Started kubelet" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.673761 4907 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.674302 4907 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.674820 4907 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 27 18:05:45 crc systemd[1]: Started Kubernetes Kubelet. Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.677325 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.677459 4907 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.677502 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 20:58:15.511369193 +0000 UTC Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.677475 4907 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.677732 4907 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.677844 4907 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.677858 4907 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 
18:05:45.677963 4907 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.678081 4907 server.go:460] "Adding debug handlers to kubelet server" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.679275 4907 factory.go:55] Registering systemd factory Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.679299 4907 factory.go:221] Registration of the systemd container factory successfully Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.688737 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="200ms" Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.688870 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.689018 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.689320 4907 factory.go:153] Registering CRI-O factory Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.689475 4907 factory.go:221] Registration of the crio container factory successfully Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.689685 4907 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api 
service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.689801 4907 factory.go:103] Registering Raw factory Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.689893 4907 manager.go:1196] Started watching for new ooms in manager Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.690763 4907 manager.go:319] Starting recovery of all containers Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.688929 4907 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.184:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188ea8a898b63c40 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 18:05:45.67323552 +0000 UTC m=+0.802518152,LastTimestamp:2026-01-27 18:05:45.67323552 +0000 UTC m=+0.802518152,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.698858 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.698935 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.698947 4907 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.698957 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.698968 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.698984 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.698995 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699003 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699018 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699028 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699038 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699051 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699061 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699075 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699084 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699094 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699123 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699132 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699149 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699159 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699167 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699175 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699184 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699193 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699202 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699213 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699225 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" 
seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699235 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699246 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699255 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699263 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699273 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699283 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 
18:05:45.699293 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699304 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699312 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699321 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699330 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699339 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699348 4907 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699360 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699370 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699381 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699390 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699399 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699409 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699418 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699429 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699440 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699449 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699459 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699469 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699482 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699494 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699508 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699520 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699633 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699647 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699657 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699668 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699678 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699688 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699701 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699711 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699720 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699729 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699739 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699749 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699759 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699768 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699778 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699787 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699797 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699810 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699822 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699832 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" 
seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699843 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699853 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699866 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699879 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699889 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699900 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: 
I0127 18:05:45.699910 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699921 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699930 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699941 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699955 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699964 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699976 4907 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699986 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.699998 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700009 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700020 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700030 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700040 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700049 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700058 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700069 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700079 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700089 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700099 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700109 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700119 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700130 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700145 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700155 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700165 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700176 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700187 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700198 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700214 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700226 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700236 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700246 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700255 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700266 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700275 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700285 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700297 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" 
seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700305 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700316 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700326 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700336 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700345 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700355 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.700366 4907 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718651 4907 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718749 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718775 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718790 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718810 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 27 18:05:45 crc 
kubenswrapper[4907]: I0127 18:05:45.718829 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718850 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718867 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718880 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718899 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718912 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718931 4907 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718947 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718965 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.718988 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719005 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719020 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719039 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719055 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719073 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719093 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719112 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719150 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719164 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719244 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719259 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719280 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719307 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719321 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719352 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719368 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719384 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719525 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719540 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719609 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719625 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719644 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719667 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719683 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719730 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719749 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719765 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719792 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719810 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719828 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719853 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719872 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719895 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719914 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719932 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719955 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719972 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.719997 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720017 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720032 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720051 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720067 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720085 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720103 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720120 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720180 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720204 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720226 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720244 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720263 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720284 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720302 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720327 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720342 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720358 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720379 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720397 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" 
seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720422 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720437 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720485 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720499 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720520 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720535 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720566 4907 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720584 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720597 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720616 4907 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720725 4907 reconstruct.go:97] "Volume reconstruction finished" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.720734 4907 reconciler.go:26] "Reconciler: start to sync state" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.726137 4907 manager.go:324] Recovery completed Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.736191 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.738865 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.738901 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.738913 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.740722 4907 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.740753 4907 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.740789 4907 state_mem.go:36] "Initialized new in-memory state store" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.741171 4907 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.744800 4907 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.746655 4907 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.746706 4907 kubelet.go:2335] "Starting kubelet main sync loop" Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.746750 4907 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 18:05:45 crc kubenswrapper[4907]: W0127 18:05:45.748167 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.748278 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": 
dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.778870 4907 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.783179 4907 policy_none.go:49] "None policy: Start" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.784096 4907 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.784124 4907 state_mem.go:35] "Initializing new in-memory state store" Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.847283 4907 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.850015 4907 manager.go:334] "Starting Device Plugin manager" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.850074 4907 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.850090 4907 server.go:79] "Starting device plugin registration server" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.850723 4907 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.850747 4907 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.851052 4907 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.851217 4907 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.851238 4907 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 18:05:45 crc 
kubenswrapper[4907]: E0127 18:05:45.858416 4907 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.889967 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="400ms" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.951688 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.952689 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.952734 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.952748 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:45 crc kubenswrapper[4907]: I0127 18:05:45.952777 4907 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:05:45 crc kubenswrapper[4907]: E0127 18:05:45.953498 4907 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.184:6443: connect: connection refused" node="crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.047959 4907 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 18:05:46 crc 
kubenswrapper[4907]: I0127 18:05:46.048254 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.050425 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.050480 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.050495 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.050729 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.051056 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.051108 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.052195 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.052232 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.052246 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.052386 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.052545 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.052611 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.053118 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.053199 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.053234 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.054102 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.054126 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.054138 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.054281 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.054380 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.054457 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.055011 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc 
kubenswrapper[4907]: I0127 18:05:46.055186 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.055275 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.057156 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.057175 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.057216 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.057241 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.057190 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.057350 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.057530 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.057680 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.057741 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.059271 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.059304 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.059317 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.059333 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.059374 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.059387 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.059455 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.059482 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.060731 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.060778 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.060796 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.129782 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.129864 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.129907 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130033 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130188 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130233 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130303 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130370 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130408 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130521 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130659 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130739 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130838 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130927 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.130983 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.154077 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.155975 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.156033 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.156057 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.156102 4907 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:05:46 crc kubenswrapper[4907]: E0127 18:05:46.157152 4907 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.184:6443: connect: connection refused" node="crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232421 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232491 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232528 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232631 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232657 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232716 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232733 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232774 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232853 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232905 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232923 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232943 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc 
kubenswrapper[4907]: I0127 18:05:46.233005 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.232967 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233115 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233114 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233207 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233347 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233411 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233414 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233476 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233493 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233510 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 
18:05:46.233538 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233542 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233596 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233621 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233641 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233681 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.233746 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: E0127 18:05:46.291348 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="800ms" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.398102 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.410599 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.435865 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.470152 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: W0127 18:05:46.482238 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-64f9c0ad888ae684ba98229d133b658ad84135d0d168744f6ce0cbd8a0112949 WatchSource:0}: Error finding container 64f9c0ad888ae684ba98229d133b658ad84135d0d168744f6ce0cbd8a0112949: Status 404 returned error can't find the container with id 64f9c0ad888ae684ba98229d133b658ad84135d0d168744f6ce0cbd8a0112949 Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.483875 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:46 crc kubenswrapper[4907]: W0127 18:05:46.484437 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-a0d25d4006e4c0726ef7f312de1f029b78df8d90a01bc75ae8a5db5f802cc101 WatchSource:0}: Error finding container a0d25d4006e4c0726ef7f312de1f029b78df8d90a01bc75ae8a5db5f802cc101: Status 404 returned error can't find the container with id a0d25d4006e4c0726ef7f312de1f029b78df8d90a01bc75ae8a5db5f802cc101 Jan 27 18:05:46 crc kubenswrapper[4907]: W0127 18:05:46.484929 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-c5452aa810001a1b63e883a7ea0df0f1e83608f344ead54b66cc2c6c0e819c98 WatchSource:0}: Error finding container c5452aa810001a1b63e883a7ea0df0f1e83608f344ead54b66cc2c6c0e819c98: Status 404 returned error can't find the container with id c5452aa810001a1b63e883a7ea0df0f1e83608f344ead54b66cc2c6c0e819c98 Jan 27 18:05:46 crc kubenswrapper[4907]: W0127 18:05:46.499287 4907 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-1ec54b854df732bd988b8aea5c43d5c9fa1e6404e011f563cd560e3104c18e36 WatchSource:0}: Error finding container 1ec54b854df732bd988b8aea5c43d5c9fa1e6404e011f563cd560e3104c18e36: Status 404 returned error can't find the container with id 1ec54b854df732bd988b8aea5c43d5c9fa1e6404e011f563cd560e3104c18e36 Jan 27 18:05:46 crc kubenswrapper[4907]: W0127 18:05:46.502621 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-618fa150b74bb4b67967025aea123a509edf59f11353ccfe82c67ece7452ddd3 WatchSource:0}: Error finding container 618fa150b74bb4b67967025aea123a509edf59f11353ccfe82c67ece7452ddd3: Status 404 returned error can't find the container with id 618fa150b74bb4b67967025aea123a509edf59f11353ccfe82c67ece7452ddd3 Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.558278 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.559930 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.559982 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.560032 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.560068 4907 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:05:46 crc kubenswrapper[4907]: E0127 18:05:46.560501 4907 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.184:6443: connect: connection refused" node="crc" Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.678469 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 09:50:40.032565152 +0000 UTC Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.678975 4907 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.753898 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"64f9c0ad888ae684ba98229d133b658ad84135d0d168744f6ce0cbd8a0112949"} Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.755131 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c5452aa810001a1b63e883a7ea0df0f1e83608f344ead54b66cc2c6c0e819c98"} Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.755995 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"618fa150b74bb4b67967025aea123a509edf59f11353ccfe82c67ece7452ddd3"} Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.757292 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1ec54b854df732bd988b8aea5c43d5c9fa1e6404e011f563cd560e3104c18e36"} Jan 27 18:05:46 crc kubenswrapper[4907]: I0127 18:05:46.758268 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a0d25d4006e4c0726ef7f312de1f029b78df8d90a01bc75ae8a5db5f802cc101"} Jan 27 18:05:46 crc kubenswrapper[4907]: W0127 18:05:46.883702 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:46 crc kubenswrapper[4907]: E0127 18:05:46.883780 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:46 crc kubenswrapper[4907]: W0127 18:05:46.893692 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:46 crc kubenswrapper[4907]: E0127 18:05:46.893741 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:46 crc kubenswrapper[4907]: W0127 18:05:46.906686 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 
27 18:05:46 crc kubenswrapper[4907]: E0127 18:05:46.906730 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:47 crc kubenswrapper[4907]: E0127 18:05:47.092211 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="1.6s" Jan 27 18:05:47 crc kubenswrapper[4907]: W0127 18:05:47.157380 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:47 crc kubenswrapper[4907]: E0127 18:05:47.157515 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.360959 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.362396 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.362476 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 
27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.362491 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.362524 4907 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:05:47 crc kubenswrapper[4907]: E0127 18:05:47.363098 4907 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.184:6443: connect: connection refused" node="crc" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.568967 4907 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 18:05:47 crc kubenswrapper[4907]: E0127 18:05:47.570270 4907 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.678647 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 16:06:35.353799524 +0000 UTC Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.678857 4907 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.761997 4907 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9" exitCode=0 Jan 27 
18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.762039 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9"} Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.762109 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.763230 4907 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0" exitCode=0 Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.763342 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.763715 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0"} Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.763866 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.763916 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.763926 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.764172 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.764192 4907 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.764200 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.766028 4907 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728" exitCode=0 Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.766106 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728"} Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.766202 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.767402 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.767427 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.767437 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.768479 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909"} Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.768746 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.769898 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.769932 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.769946 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.770284 4907 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d" exitCode=0 Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.770298 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d"} Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.770416 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.771376 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.771423 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:47 crc kubenswrapper[4907]: I0127 18:05:47.771436 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:48 crc kubenswrapper[4907]: W0127 18:05:48.675339 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:48 crc kubenswrapper[4907]: E0127 18:05:48.675962 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.678798 4907 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.679431 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 22:55:26.977016952 +0000 UTC Jan 27 18:05:48 crc kubenswrapper[4907]: E0127 18:05:48.693471 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="3.2s" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.774929 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9"} Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.777187 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80"} Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.779216 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff"} Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.782588 4907 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7" exitCode=0 Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.782686 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7"} Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.782749 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.783604 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.783656 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.783670 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.785519 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ba949a5a3dbc832b8d656233d96ff0aebf288d3467d3b4af2efb7f3cd25e23d8"} Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.785721 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.791612 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.791660 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.791671 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:48 crc kubenswrapper[4907]: W0127 18:05:48.866165 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:48 crc kubenswrapper[4907]: E0127 18:05:48.866312 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.963529 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.964726 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.964765 4907 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.964780 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:48 crc kubenswrapper[4907]: I0127 18:05:48.964805 4907 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:05:48 crc kubenswrapper[4907]: E0127 18:05:48.965384 4907 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.184:6443: connect: connection refused" node="crc" Jan 27 18:05:49 crc kubenswrapper[4907]: W0127 18:05:49.547410 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:49 crc kubenswrapper[4907]: E0127 18:05:49.547522 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.678671 4907 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.679592 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:39:10.61299432 +0000 UTC Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.790604 4907 generic.go:334] 
"Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3" exitCode=0 Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.790690 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3"} Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.790796 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.792744 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.792811 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.792832 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.794437 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142"} Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.794493 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.794497 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2"} Jan 27 18:05:49 crc kubenswrapper[4907]: 
I0127 18:05:49.795819 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.795850 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.795860 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.798198 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9"} Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.798251 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227"} Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.798277 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c"} Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.798291 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531"} Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.798443 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.799414 4907 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.799488 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.799515 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.803869 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad"} Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.803899 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.803925 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8"} Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.803899 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.805000 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.805034 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.805045 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.805058 
4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.805066 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:49 crc kubenswrapper[4907]: I0127 18:05:49.805070 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:50 crc kubenswrapper[4907]: W0127 18:05:50.298314 4907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 27 18:05:50 crc kubenswrapper[4907]: E0127 18:05:50.298420 4907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.454276 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.544471 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.679674 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 17:22:09.729665723 +0000 UTC Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.777659 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.814076 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a"} Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.814133 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0"} Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.814147 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11"} Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.814156 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f"} Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.814103 4907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.814170 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.814210 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.814161 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.815280 4907 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.815303 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.815313 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.815403 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.815434 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.815435 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.815482 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.815499 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:50 crc kubenswrapper[4907]: I0127 18:05:50.815446 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.680138 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:58:50.930046981 +0000 UTC Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.688391 4907 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.701166 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.709445 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.821655 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.821622 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035"} Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.821967 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.822006 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.824599 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.827971 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.828015 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.828029 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.828288 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.828308 4907 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.828320 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.828945 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.829008 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.829020 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.829140 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.829181 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:51 crc kubenswrapper[4907]: I0127 18:05:51.829194 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.166086 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.167759 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.167811 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.167832 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 
18:05:52.167872 4907 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.429998 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.680608 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 08:27:20.120320942 +0000 UTC Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.824510 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.824511 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.824746 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.826304 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.826367 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.826389 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.826452 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.826506 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.826534 4907 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.826685 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.826726 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:52 crc kubenswrapper[4907]: I0127 18:05:52.826748 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:53 crc kubenswrapper[4907]: I0127 18:05:53.681026 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:39:31.550545982 +0000 UTC Jan 27 18:05:54 crc kubenswrapper[4907]: I0127 18:05:54.395015 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 27 18:05:54 crc kubenswrapper[4907]: I0127 18:05:54.395330 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:54 crc kubenswrapper[4907]: I0127 18:05:54.397091 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:54 crc kubenswrapper[4907]: I0127 18:05:54.397283 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:54 crc kubenswrapper[4907]: I0127 18:05:54.397476 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:54 crc kubenswrapper[4907]: I0127 18:05:54.681152 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 21:53:25.440928903 +0000 UTC Jan 27 18:05:55 crc kubenswrapper[4907]: I0127 18:05:55.681544 4907 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 17:33:58.870535369 +0000 UTC Jan 27 18:05:55 crc kubenswrapper[4907]: E0127 18:05:55.858507 4907 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 18:05:56 crc kubenswrapper[4907]: I0127 18:05:56.266380 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:56 crc kubenswrapper[4907]: I0127 18:05:56.266649 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:56 crc kubenswrapper[4907]: I0127 18:05:56.268198 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:56 crc kubenswrapper[4907]: I0127 18:05:56.268245 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:56 crc kubenswrapper[4907]: I0127 18:05:56.268263 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:56 crc kubenswrapper[4907]: I0127 18:05:56.277860 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:56 crc kubenswrapper[4907]: I0127 18:05:56.682089 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 11:54:42.103823268 +0000 UTC Jan 27 18:05:56 crc kubenswrapper[4907]: I0127 18:05:56.835802 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:56 crc kubenswrapper[4907]: I0127 18:05:56.836793 4907 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:56 crc kubenswrapper[4907]: I0127 18:05:56.836859 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:56 crc kubenswrapper[4907]: I0127 18:05:56.836879 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:57 crc kubenswrapper[4907]: I0127 18:05:57.244258 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 27 18:05:57 crc kubenswrapper[4907]: I0127 18:05:57.244539 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:57 crc kubenswrapper[4907]: I0127 18:05:57.248186 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:57 crc kubenswrapper[4907]: I0127 18:05:57.248271 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:57 crc kubenswrapper[4907]: I0127 18:05:57.248297 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:57 crc kubenswrapper[4907]: I0127 18:05:57.683174 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 03:37:11.955700687 +0000 UTC Jan 27 18:05:58 crc kubenswrapper[4907]: I0127 18:05:58.669056 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:58 crc kubenswrapper[4907]: I0127 18:05:58.669725 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:58 crc kubenswrapper[4907]: I0127 18:05:58.671538 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 18:05:58 crc kubenswrapper[4907]: I0127 18:05:58.671660 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:58 crc kubenswrapper[4907]: I0127 18:05:58.671683 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:58 crc kubenswrapper[4907]: I0127 18:05:58.678140 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:05:58 crc kubenswrapper[4907]: I0127 18:05:58.684215 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 01:14:42.883939648 +0000 UTC Jan 27 18:05:58 crc kubenswrapper[4907]: I0127 18:05:58.840896 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:05:58 crc kubenswrapper[4907]: I0127 18:05:58.842590 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:05:58 crc kubenswrapper[4907]: I0127 18:05:58.842646 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:05:58 crc kubenswrapper[4907]: I0127 18:05:58.842656 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:05:59 crc kubenswrapper[4907]: I0127 18:05:59.685270 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 03:31:59.751730374 +0000 UTC Jan 27 18:06:00 crc kubenswrapper[4907]: I0127 18:06:00.304510 4907 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with 
statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Jan 27 18:06:00 crc kubenswrapper[4907]: I0127 18:06:00.304616 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 18:06:00 crc kubenswrapper[4907]: I0127 18:06:00.310150 4907 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Jan 27 18:06:00 crc kubenswrapper[4907]: I0127 18:06:00.310227 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 18:06:00 crc kubenswrapper[4907]: I0127 18:06:00.685741 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 17:56:36.990930832 +0000 UTC Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.669379 4907 patch_prober.go:28] 
interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.669498 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.686013 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 23:38:33.711033084 +0000 UTC Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.701742 4907 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.701822 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.719757 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:06:01 crc 
kubenswrapper[4907]: I0127 18:06:01.719983 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.720522 4907 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.720659 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.721343 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.721387 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.721400 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.727522 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.848679 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.849178 4907 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure 
output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.849276 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.849478 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.849507 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:01 crc kubenswrapper[4907]: I0127 18:06:01.849515 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:02 crc kubenswrapper[4907]: I0127 18:06:02.568731 4907 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 27 18:06:02 crc kubenswrapper[4907]: I0127 18:06:02.568802 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 27 18:06:02 crc kubenswrapper[4907]: I0127 18:06:02.686760 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2025-12-08 11:27:13.68824493 +0000 UTC Jan 27 18:06:03 crc kubenswrapper[4907]: I0127 18:06:03.687636 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 14:16:10.614043394 +0000 UTC Jan 27 18:06:04 crc kubenswrapper[4907]: I0127 18:06:04.688025 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 14:52:51.44747346 +0000 UTC Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.284213 4907 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.285101 4907 trace.go:236] Trace[1290662536]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 18:05:53.883) (total time: 11401ms): Jan 27 18:06:05 crc kubenswrapper[4907]: Trace[1290662536]: ---"Objects listed" error: 11401ms (18:06:05.284) Jan 27 18:06:05 crc kubenswrapper[4907]: Trace[1290662536]: [11.401862523s] [11.401862523s] END Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.285140 4907 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.306949 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.309026 4907 trace.go:236] Trace[1461441112]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 18:05:54.294) (total time: 11014ms): Jan 27 18:06:05 crc kubenswrapper[4907]: Trace[1461441112]: ---"Objects listed" error: 11014ms 
(18:06:05.308) Jan 27 18:06:05 crc kubenswrapper[4907]: Trace[1461441112]: [11.014146868s] [11.014146868s] END Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.309085 4907 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.310534 4907 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.310633 4907 trace.go:236] Trace[1335700862]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 18:05:54.272) (total time: 11038ms): Jan 27 18:06:05 crc kubenswrapper[4907]: Trace[1335700862]: ---"Objects listed" error: 11038ms (18:06:05.310) Jan 27 18:06:05 crc kubenswrapper[4907]: Trace[1335700862]: [11.038245768s] [11.038245768s] END Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.310672 4907 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.314087 4907 trace.go:236] Trace[752683230]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 18:05:52.724) (total time: 12586ms): Jan 27 18:06:05 crc kubenswrapper[4907]: Trace[752683230]: ---"Objects listed" error: 12586ms (18:06:05.311) Jan 27 18:06:05 crc kubenswrapper[4907]: Trace[752683230]: [12.586590125s] [12.586590125s] END Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.314199 4907 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.319786 4907 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.652975 4907 apiserver.go:52] "Watching apiserver" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.655081 4907 reflector.go:368] Caches 
populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.655387 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.655924 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.656103 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.656206 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.656123 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.656243 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.656443 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.656722 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.657439 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.657538 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.658052 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.668177 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.669095 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.669113 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.669447 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.669642 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.669759 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.669809 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.679879 4907 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.680775 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 18:06:05 crc kubenswrapper[4907]: 
I0127 18:06:05.688728 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 04:52:39.906880258 +0000 UTC Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.704289 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.713371 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.713504 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.713655 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") 
pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.714766 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.715287 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.715887 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.714427 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.714545 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.714706 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.715240 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.715838 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.716580 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.717192 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.716440 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.716656 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.717814 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.717940 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 18:06:05 crc 
kubenswrapper[4907]: I0127 18:06:05.718048 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718110 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718217 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718285 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718293 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718431 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718464 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718491 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718510 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718533 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718570 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718593 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718614 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718635 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718653 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 
27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718672 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718689 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718709 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718725 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718745 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718764 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718782 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718801 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718822 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718839 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718860 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 
18:06:05.718877 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718895 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718998 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719015 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718537 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718580 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.718831 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719011 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719210 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719237 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719253 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719303 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719322 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719339 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" 
(UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719377 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719398 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719421 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719460 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719482 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") 
" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719577 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719597 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719615 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719654 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719682 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719722 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719741 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719826 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719852 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719894 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719916 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 
18:06:05.719937 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719978 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719998 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720017 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720057 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720078 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720103 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720152 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720175 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720220 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720248 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720309 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720331 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720375 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720396 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720415 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720455 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720474 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720492 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720530 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720575 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720598 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720624 4907 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720673 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720700 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720749 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720768 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720790 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720828 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720847 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720865 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720905 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720927 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720946 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721009 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721033 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721074 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721098 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721139 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 
18:06:05.721159 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721178 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721215 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721237 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721258 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721297 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721319 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721338 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721375 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721400 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721426 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 18:06:05 crc 
kubenswrapper[4907]: I0127 18:06:05.721467 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721489 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721526 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719258 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.727827 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719486 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719694 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719651 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719788 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719881 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.719992 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.728230 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720096 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720106 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720137 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720148 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720205 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720501 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720510 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720596 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720600 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720679 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720965 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.720977 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721283 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721353 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.721537 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:06:06.221514398 +0000 UTC m=+21.350797010 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.728610 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721631 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.721894 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.722204 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.722876 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.722932 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.722947 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.728684 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.728749 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.728795 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:06:05 crc kubenswrapper[4907]: 
I0127 18:06:05.728835 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.728876 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.728912 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.728954 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.728989 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729025 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") 
pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729063 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729102 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729141 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729181 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729243 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 
18:06:05.729279 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729316 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729351 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729385 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730002 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730056 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") 
pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730091 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730134 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730173 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730208 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730247 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730284 4907 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730321 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730383 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730422 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730456 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730498 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730533 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730590 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730629 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730667 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730704 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730739 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730819 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730858 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730895 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730930 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730975 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731015 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731053 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731092 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731131 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731222 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 18:06:05 crc 
kubenswrapper[4907]: I0127 18:06:05.731317 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731359 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731395 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731500 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731544 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731605 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731645 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731682 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731723 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731760 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731799 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731835 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731876 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731915 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731955 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732038 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732081 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732119 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732155 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732195 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732232 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732179 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732268 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732756 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732842 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732911 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732972 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733040 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733105 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733165 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733219 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733288 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733352 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733414 4907 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733471 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733629 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733695 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733812 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733879 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: 
\"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733942 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734006 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734071 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734134 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734205 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734266 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734321 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734404 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734460 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 
18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734520 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734645 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734704 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734853 4907 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734889 4907 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734923 4907 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" 
(UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734956 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.734987 4907 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735023 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735055 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735085 4907 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735114 4907 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735146 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node 
\"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735175 4907 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735206 4907 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735238 4907 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735269 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735309 4907 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735343 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735385 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.722968 4907 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.722989 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.723393 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.723383 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.723900 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.724078 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.724187 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.724283 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.724286 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.724700 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.724786 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.725015 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.725118 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.725400 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.725503 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.725923 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.725944 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.726060 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.726524 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.726814 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.726919 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.726962 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.727202 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.727218 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.727455 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.727758 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.727810 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729284 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729638 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729728 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.729860 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730095 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730085 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730290 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730570 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730792 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.730940 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.731682 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.732790 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733296 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733324 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.733776 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735689 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735827 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735841 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.735930 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.737635 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.737660 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.737688 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.737747 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.738323 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.738648 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.738740 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.738782 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.738959 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.739154 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.739259 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.739288 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.739349 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.739401 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.739417 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.739979 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.740527 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.740572 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.740872 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.741161 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.741220 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.741242 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.741254 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.741729 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.741842 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.742342 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.742393 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.742388 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.742606 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.742636 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.742692 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.742762 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.743392 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.743581 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.743626 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.744518 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.744900 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.745136 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.745232 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.745236 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.745308 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.744548 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.745874 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.745865 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.745953 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.746326 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.746417 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.746442 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.746512 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.746719 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.746754 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.746776 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.746796 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.747023 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.747150 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.747347 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748396 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748484 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.747594 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.747612 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.747752 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.747842 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.747730 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748300 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748366 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748354 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748381 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748465 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748607 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748606 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748732 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748759 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748768 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.748748 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.749020 4907 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.749064 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.749549 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.750835 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.751231 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.751573 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.751847 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.752051 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.752110 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.752296 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.752502 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.753029 4907 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.753091 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.753117 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.753306 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:06.253285185 +0000 UTC m=+21.382567797 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.753448 4907 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.754045 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:06.254004517 +0000 UTC m=+21.383287199 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.754635 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.756007 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.756679 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.757262 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.757846 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.765007 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.765929 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.767388 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.767864 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.768088 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.768330 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.768855 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.769020 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.769216 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.772624 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.772658 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.772678 4907 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.772783 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:06.272733031 +0000 UTC m=+21.402015833 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.773098 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.773264 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.777724 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.777889 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.777915 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.777933 4907 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:05 crc kubenswrapper[4907]: E0127 18:06:05.778002 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:06.277973189 +0000 UTC m=+21.407255801 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.779196 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.788026 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.788889 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.790701 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). 
InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.790919 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.791131 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.798458 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.809851 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.813150 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.814207 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.820862 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.830218 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.830951 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.831868 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.832972 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837142 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837212 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837629 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837646 4907 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837657 4907 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837669 4907 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837680 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837689 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837699 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837708 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837718 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837727 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837735 4907 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837744 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837752 4907 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837761 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837769 4907 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837777 4907 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on 
node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837786 4907 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837794 4907 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837803 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837813 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837822 4907 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837832 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837841 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 
18:06:05.837851 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837860 4907 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837868 4907 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837876 4907 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837884 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837893 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837903 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837911 4907 reconciler_common.go:293] 
"Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837920 4907 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837928 4907 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837939 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837948 4907 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837957 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837965 4907 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837974 4907 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837982 4907 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.837991 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838001 4907 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838009 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838019 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838027 4907 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838034 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838043 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838051 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838059 4907 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838069 4907 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838077 4907 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838086 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838097 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath 
\"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838108 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838118 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838129 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838138 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838146 4907 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838157 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838166 4907 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838179 
4907 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838190 4907 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838200 4907 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838209 4907 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838217 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838225 4907 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838242 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838252 4907 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838261 4907 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838270 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838277 4907 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838286 4907 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838294 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838301 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838309 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") 
on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838317 4907 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838326 4907 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838334 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838342 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838351 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838360 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838368 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 
18:06:05.838376 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838384 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838392 4907 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838401 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838424 4907 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838434 4907 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838442 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838454 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838462 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838471 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838479 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838489 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838497 4907 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838506 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838515 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838523 4907 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838531 4907 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838539 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838546 4907 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838571 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838581 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838589 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838596 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838604 4907 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838612 4907 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838620 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838628 4907 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838637 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838644 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 
crc kubenswrapper[4907]: I0127 18:06:05.838652 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838660 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838669 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838677 4907 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838685 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838692 4907 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838702 4907 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 
18:06:05.838711 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838719 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838726 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838734 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838744 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838752 4907 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838760 4907 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838769 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838777 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838785 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838794 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838802 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838811 4907 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838820 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838829 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838837 4907 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838846 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838856 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838864 4907 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838874 4907 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838882 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838890 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" 
DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838899 4907 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838907 4907 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838915 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838923 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838932 4907 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838940 4907 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838948 4907 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838956 4907 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838964 4907 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838973 4907 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838981 4907 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838989 4907 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.838997 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839005 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839013 4907 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node 
\"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839020 4907 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839028 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839036 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839044 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839052 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839065 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839073 4907 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839081 4907 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839090 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839097 4907 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839105 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839112 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839121 4907 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839129 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839137 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839145 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839153 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839160 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839168 4907 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.839950 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.840077 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 
18:06:05.841281 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.847381 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.848409 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.851671 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.857249 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.857797 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.860259 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.860342 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.865691 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.867374 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.868480 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.869349 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.876624 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.876880 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.877466 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.877921 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.879475 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.880202 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.880769 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.883734 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.885348 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.886883 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.887744 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.889608 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.890468 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.892862 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.893678 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.894514 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.895104 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.895986 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 18:06:05 crc 
kubenswrapper[4907]: I0127 18:06:05.897352 4907 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.897521 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.900026 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.900730 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.901843 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.904090 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.904898 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.906111 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" 
path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.906751 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.907986 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.908920 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.909700 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.910185 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.910839 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.912214 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.912919 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.914164 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.914912 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.916714 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.917301 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.918421 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.919096 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.920256 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.920453 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.921210 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.921786 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.935877 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.939524 4907 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.939548 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.939572 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.939600 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.945850 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.957512 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.971642 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.983837 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.990682 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:06:05 crc kubenswrapper[4907]: W0127 18:06:05.998443 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-718d6216d67a1c8d9d51d680d37da5b24effe05d8711fd7a0372f425b708f336 WatchSource:0}: Error finding container 718d6216d67a1c8d9d51d680d37da5b24effe05d8711fd7a0372f425b708f336: Status 404 returned error can't find the container with id 718d6216d67a1c8d9d51d680d37da5b24effe05d8711fd7a0372f425b708f336 Jan 27 18:06:05 crc kubenswrapper[4907]: I0127 18:06:05.999490 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:06:06 crc kubenswrapper[4907]: W0127 18:06:06.005593 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-5882c636ec09bd3e3e118fa3c3f3eb4ef1888a86c489759f4940637ae0491f14 WatchSource:0}: Error finding container 5882c636ec09bd3e3e118fa3c3f3eb4ef1888a86c489759f4940637ae0491f14: Status 404 returned error can't find the container with id 5882c636ec09bd3e3e118fa3c3f3eb4ef1888a86c489759f4940637ae0491f14 Jan 27 18:06:06 crc kubenswrapper[4907]: W0127 18:06:06.017358 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-b8553e24a82c8bf7fa07257fff2cd4a4f3cdb201656d8ef143c494687a05048f WatchSource:0}: Error finding container b8553e24a82c8bf7fa07257fff2cd4a4f3cdb201656d8ef143c494687a05048f: Status 404 returned error can't find the container with id b8553e24a82c8bf7fa07257fff2cd4a4f3cdb201656d8ef143c494687a05048f Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.241850 4907 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.242041 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:06:07.242002774 +0000 UTC m=+22.371285386 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.342800 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.342844 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.342864 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.342885 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.343000 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.342994 4907 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.343051 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.343060 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.343070 4907 projected.go:194] Error preparing data for projected volume 
kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.343116 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:07.34308767 +0000 UTC m=+22.472370292 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.343123 4907 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.343139 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:07.343129312 +0000 UTC m=+22.472411944 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.343015 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.343154 4907 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.343160 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:07.343149092 +0000 UTC m=+22.472431714 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:06 crc kubenswrapper[4907]: E0127 18:06:06.343187 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:07.343168083 +0000 UTC m=+22.472450695 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.688911 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 11:37:35.970235387 +0000 UTC Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.876694 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b8553e24a82c8bf7fa07257fff2cd4a4f3cdb201656d8ef143c494687a05048f"} Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.880320 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d"} Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.880390 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f"} Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.880408 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5882c636ec09bd3e3e118fa3c3f3eb4ef1888a86c489759f4940637ae0491f14"} Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.883520 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4"} Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.883670 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"718d6216d67a1c8d9d51d680d37da5b24effe05d8711fd7a0372f425b708f336"} Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.887228 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.891625 4907 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9" exitCode=255 Jan 
27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.891689 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9"} Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.906655 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\
":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.907258 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.907679 4907 scope.go:117] "RemoveContainer" containerID="51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9" Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 
18:06:06.920618 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.941839 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.962136 4907 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.980094 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:06 crc kubenswrapper[4907]: I0127 18:06:06.996356 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.016831 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.033790 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.049487 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.068630 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.083645 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.096601 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.111416 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.254507 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.254767 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:06:09.254729204 +0000 UTC m=+24.384011816 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.277300 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.290662 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.291981 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.294283 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.307522 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.323259 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.343059 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.355632 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.355688 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.355726 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.355756 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.355861 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.355904 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.355919 4907 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.355929 4907 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.355993 4907 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:09.355967615 +0000 UTC m=+24.485250237 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.356032 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:09.356022047 +0000 UTC m=+24.485304669 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.355882 4907 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.356052 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.356075 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.356093 4907 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.356080 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:09.356072369 +0000 UTC m=+24.485354991 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.356155 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:09.35613208 +0000 UTC m=+24.485414702 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.358666 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.372631 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.383752 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.401798 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.416840 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.430390 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.451790 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.467538 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.493331 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.518968 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.536641 4907 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.689170 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:39:23.088228689 +0000 UTC Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.747805 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.747862 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.747977 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.747967 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.748069 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:07 crc kubenswrapper[4907]: E0127 18:06:07.748245 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.896303 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.898873 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01"} Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.898926 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.914685 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.929259 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.950139 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.971115 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:07 crc kubenswrapper[4907]: I0127 18:06:07.988190 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.004208 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.022175 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.036919 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.673288 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.677030 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.680489 4907 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.689747 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:24:28.529424852 +0000 UTC Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.695027 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.708017 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.721967 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.740296 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.755679 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.770087 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.784856 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.797706 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.818934 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.834249 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.846903 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.862121 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.875281 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.902654 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.904659 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586"} Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.922776 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.935967 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.947176 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.961293 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.977282 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:08 crc kubenswrapper[4907]: I0127 18:06:08.993298 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.009987 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.026985 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.053064 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.070415 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.088220 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.102095 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.274765 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.275030 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:06:13.274985931 +0000 UTC m=+28.404268573 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.376490 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.376635 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.376715 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.376770 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.376857 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.376775 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.376884 4907 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 
27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.376946 4907 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.376793 4907 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.377040 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:13.377007775 +0000 UTC m=+28.506290427 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.377087 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.377172 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.377197 4907 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.377106 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:13.377055767 +0000 UTC m=+28.506338419 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.377337 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:13.377299734 +0000 UTC m=+28.506582386 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.377378 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:13.377355726 +0000 UTC m=+28.506638378 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.690773 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 08:05:31.712301113 +0000 UTC Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.747595 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.747603 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.747775 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:09 crc kubenswrapper[4907]: I0127 18:06:09.747831 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.747850 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:09 crc kubenswrapper[4907]: E0127 18:06:09.748019 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.413424 4907 csr.go:261] certificate signing request csr-x65vj is approved, waiting to be issued Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.427469 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-9plnb"] Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.427854 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-9plnb" Jan 27 18:06:10 crc kubenswrapper[4907]: W0127 18:06:10.431484 4907 reflector.go:561] object-"openshift-image-registry"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 27 18:06:10 crc kubenswrapper[4907]: E0127 18:06:10.431540 4907 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:06:10 crc kubenswrapper[4907]: W0127 18:06:10.432254 4907 reflector.go:561] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": failed to list *v1.Secret: secrets "node-ca-dockercfg-4777p" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 27 18:06:10 crc kubenswrapper[4907]: W0127 18:06:10.432280 4907 reflector.go:561] object-"openshift-image-registry"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 27 18:06:10 crc kubenswrapper[4907]: E0127 18:06:10.432286 4907 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"node-ca-dockercfg-4777p\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets 
\"node-ca-dockercfg-4777p\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:06:10 crc kubenswrapper[4907]: E0127 18:06:10.432302 4907 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:06:10 crc kubenswrapper[4907]: W0127 18:06:10.432320 4907 reflector.go:561] object-"openshift-image-registry"/"image-registry-certificates": failed to list *v1.ConfigMap: configmaps "image-registry-certificates" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 27 18:06:10 crc kubenswrapper[4907]: E0127 18:06:10.432371 4907 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-certificates\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-registry-certificates\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.438238 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-n4rxh"] Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.439802 4907 csr.go:257] certificate signing request csr-x65vj is issued Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 
18:06:10.439975 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-n4rxh" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.449388 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.449412 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.449588 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.484352 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/li
b/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.524790 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.550045 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.587337 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/195088d8-09aa-4943-8825-ddd4cb453056-serviceca\") pod \"node-ca-9plnb\" (UID: \"195088d8-09aa-4943-8825-ddd4cb453056\") " pod="openshift-image-registry/node-ca-9plnb" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.587391 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-997jz\" (UniqueName: \"kubernetes.io/projected/195088d8-09aa-4943-8825-ddd4cb453056-kube-api-access-997jz\") pod \"node-ca-9plnb\" (UID: \"195088d8-09aa-4943-8825-ddd4cb453056\") " pod="openshift-image-registry/node-ca-9plnb" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.587414 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t69cj\" (UniqueName: \"kubernetes.io/projected/317dc29e-e919-4bac-894d-e54b69538c31-kube-api-access-t69cj\") pod \"node-resolver-n4rxh\" (UID: \"317dc29e-e919-4bac-894d-e54b69538c31\") " pod="openshift-dns/node-resolver-n4rxh" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.587581 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/317dc29e-e919-4bac-894d-e54b69538c31-hosts-file\") pod \"node-resolver-n4rxh\" (UID: \"317dc29e-e919-4bac-894d-e54b69538c31\") " pod="openshift-dns/node-resolver-n4rxh" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.587681 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/195088d8-09aa-4943-8825-ddd4cb453056-host\") pod \"node-ca-9plnb\" (UID: \"195088d8-09aa-4943-8825-ddd4cb453056\") " pod="openshift-image-registry/node-ca-9plnb" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.602398 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.619520 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.638438 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.653932 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.666734 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.677703 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.686855 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.688158 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-997jz\" (UniqueName: \"kubernetes.io/projected/195088d8-09aa-4943-8825-ddd4cb453056-kube-api-access-997jz\") pod \"node-ca-9plnb\" (UID: \"195088d8-09aa-4943-8825-ddd4cb453056\") " pod="openshift-image-registry/node-ca-9plnb" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.688294 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-t69cj\" (UniqueName: \"kubernetes.io/projected/317dc29e-e919-4bac-894d-e54b69538c31-kube-api-access-t69cj\") pod \"node-resolver-n4rxh\" (UID: \"317dc29e-e919-4bac-894d-e54b69538c31\") " pod="openshift-dns/node-resolver-n4rxh" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.688423 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/195088d8-09aa-4943-8825-ddd4cb453056-serviceca\") pod \"node-ca-9plnb\" (UID: \"195088d8-09aa-4943-8825-ddd4cb453056\") " pod="openshift-image-registry/node-ca-9plnb" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.688533 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/317dc29e-e919-4bac-894d-e54b69538c31-hosts-file\") pod \"node-resolver-n4rxh\" (UID: \"317dc29e-e919-4bac-894d-e54b69538c31\") " pod="openshift-dns/node-resolver-n4rxh" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.688677 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/195088d8-09aa-4943-8825-ddd4cb453056-host\") pod \"node-ca-9plnb\" (UID: \"195088d8-09aa-4943-8825-ddd4cb453056\") " pod="openshift-image-registry/node-ca-9plnb" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.688679 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/317dc29e-e919-4bac-894d-e54b69538c31-hosts-file\") pod \"node-resolver-n4rxh\" (UID: \"317dc29e-e919-4bac-894d-e54b69538c31\") " pod="openshift-dns/node-resolver-n4rxh" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.688713 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/195088d8-09aa-4943-8825-ddd4cb453056-host\") pod \"node-ca-9plnb\" (UID: \"195088d8-09aa-4943-8825-ddd4cb453056\") " 
pod="openshift-image-registry/node-ca-9plnb" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.691067 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 17:30:46.613308311 +0000 UTC Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.705259 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/man
ifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a35334
4a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646
fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"st
ate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.709047 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t69cj\" (UniqueName: \"kubernetes.io/projected/317dc29e-e919-4bac-894d-e54b69538c31-kube-api-access-t69cj\") pod \"node-resolver-n4rxh\" (UID: \"317dc29e-e919-4bac-894d-e54b69538c31\") " pod="openshift-dns/node-resolver-n4rxh" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.721525 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.736445 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.755177 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-n4rxh" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.755488 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: W0127 18:06:10.768202 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod317dc29e_e919_4bac_894d_e54b69538c31.slice/crio-9591044c5bf19c630a7dd7adc8fa94e183007bd089d27d7a085d467442f0416a WatchSource:0}: Error finding container 9591044c5bf19c630a7dd7adc8fa94e183007bd089d27d7a085d467442f0416a: Status 404 returned error can't find the container with id 9591044c5bf19c630a7dd7adc8fa94e183007bd089d27d7a085d467442f0416a Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.781881 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.800882 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.817922 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.834021 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.848786 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.861483 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.882155 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.910498 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-n4rxh" event={"ID":"317dc29e-e919-4bac-894d-e54b69538c31","Type":"ContainerStarted","Data":"9591044c5bf19c630a7dd7adc8fa94e183007bd089d27d7a085d467442f0416a"} Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.956313 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-wgvjh"] Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.956861 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.959492 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.960101 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.960326 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.960395 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.960520 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.978205 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:10 crc kubenswrapper[4907]: I0127 18:06:10.988391 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.008314 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.026883 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.056735 4907 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.092229 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.093474 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-proxy-tls\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.093543 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" 
(UniqueName: \"kubernetes.io/host-path/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-rootfs\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.093675 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n59rv\" (UniqueName: \"kubernetes.io/projected/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-kube-api-access-n59rv\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.093778 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-mcd-auth-proxy-config\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.122892 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.140732 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.159406 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.176864 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.193051 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.194285 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n59rv\" (UniqueName: \"kubernetes.io/projected/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-kube-api-access-n59rv\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.194352 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-mcd-auth-proxy-config\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.194394 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-rootfs\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.194413 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-proxy-tls\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.194673 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-rootfs\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.195416 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-mcd-auth-proxy-config\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.201170 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-proxy-tls\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.221250 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n59rv\" (UniqueName: 
\"kubernetes.io/projected/437f8dd5-d37d-4b51-a08f-8c68b3bc038a-kube-api-access-n59rv\") pod \"machine-config-daemon-wgvjh\" (UID: \"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\") " pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.226603 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.267907 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.280228 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.289675 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/195088d8-09aa-4943-8825-ddd4cb453056-serviceca\") pod \"node-ca-9plnb\" (UID: \"195088d8-09aa-4943-8825-ddd4cb453056\") " pod="openshift-image-registry/node-ca-9plnb" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.362704 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-jqfkt"] Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.363375 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-fgtpz"] Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.363692 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.364055 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.365078 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qj9w2"] Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.365953 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.366978 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.367011 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.367470 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.367621 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.367765 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.367871 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.369050 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 18:06:11 crc kubenswrapper[4907]: W0127 18:06:11.371381 4907 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: configmaps "ovnkube-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 18:06:11 crc kubenswrapper[4907]: W0127 18:06:11.371416 4907 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource 
"secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.371426 4907 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:06:11 crc kubenswrapper[4907]: W0127 18:06:11.371390 4907 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.371447 4907 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:06:11 crc kubenswrapper[4907]: W0127 18:06:11.371450 4907 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: configmaps "ovnkube-script-lib" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.371487 4907 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-script-lib\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:06:11 crc kubenswrapper[4907]: W0127 18:06:11.371488 4907 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 18:06:11 crc kubenswrapper[4907]: W0127 18:06:11.371517 4907 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.371534 4907 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.371535 4907 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource 
\"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.371461 4907 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:06:11 crc kubenswrapper[4907]: W0127 18:06:11.371539 4907 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.371637 4907 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.384759 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.397213 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.411059 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.423314 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.435640 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.441585 4907 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 18:01:10 +0000 UTC, rotation deadline is 2026-11-20 01:58:00.209791915 +0000 UTC Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.441633 4907 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7111h51m48.768164158s for next certificate rotation Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.457315 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.473757 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.492944 4907 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.496819 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-script-lib\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.496876 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nl2m\" (UniqueName: \"kubernetes.io/projected/985b7738-a27c-4276-8160-c2baa64ab7f6-kube-api-access-6nl2m\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.496916 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-systemd\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.496942 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-slash\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.496961 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-bin\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.496985 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-os-release\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497008 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-daemon-config\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497045 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-ovn\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497092 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-log-socket\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497114 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-env-overrides\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497133 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-var-lib-cni-bin\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497203 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-run-multus-certs\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497299 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-cnibin\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497327 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-openvswitch\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497346 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-node-log\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497377 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/985b7738-a27c-4276-8160-c2baa64ab7f6-cni-binary-copy\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497398 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-etc-kubernetes\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497463 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-var-lib-openvswitch\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497506 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497532 
4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-os-release\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497569 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-socket-dir-parent\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497593 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-run-netns\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497619 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-systemd-units\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497638 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-netd\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497658 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovn-node-metrics-cert\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497681 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-cni-dir\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497707 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-system-cni-dir\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497732 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-kubelet\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497757 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-netns\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497781 
4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-conf-dir\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497808 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkx4q\" (UniqueName: \"kubernetes.io/projected/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-kube-api-access-qkx4q\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497834 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-cnibin\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497861 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/722204a2-dbb1-4b08-909b-09fdea49b7a0-cni-binary-copy\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497889 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " 
pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.497966 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kncv7\" (UniqueName: \"kubernetes.io/projected/722204a2-dbb1-4b08-909b-09fdea49b7a0-kube-api-access-kncv7\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.498028 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-run-k8s-cni-cncf-io\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.498080 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-config\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.498113 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/722204a2-dbb1-4b08-909b-09fdea49b7a0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.498138 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-hostroot\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.498167 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-ovn-kubernetes\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.498192 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-system-cni-dir\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.498215 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-var-lib-cni-multus\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.498241 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-var-lib-kubelet\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.498268 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-etc-openvswitch\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.499018 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.508175 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.532364 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.550146 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.568011 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.582456 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599562 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-systemd\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599624 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-systemd\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599620 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-bin\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599667 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-os-release\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599677 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-bin\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599692 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-daemon-config\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599716 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-slash\") pod 
\"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599744 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-ovn\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599774 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-log-socket\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599810 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-env-overrides\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599832 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-var-lib-cni-bin\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599862 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-cnibin\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc 
kubenswrapper[4907]: I0127 18:06:11.599880 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-run-multus-certs\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599905 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-node-log\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599925 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/985b7738-a27c-4276-8160-c2baa64ab7f6-cni-binary-copy\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599946 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-etc-kubernetes\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599956 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-os-release\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599968 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-openvswitch\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599980 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-var-lib-cni-bin\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.599988 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600007 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-os-release\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600025 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-socket-dir-parent\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600045 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-run-netns\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600071 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-var-lib-openvswitch\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600090 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-netd\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600110 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovn-node-metrics-cert\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600131 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-cni-dir\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600152 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-systemd-units\") pod 
\"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600172 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-system-cni-dir\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600192 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-kubelet\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600210 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-netns\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600230 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-conf-dir\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600251 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: 
\"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600271 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kncv7\" (UniqueName: \"kubernetes.io/projected/722204a2-dbb1-4b08-909b-09fdea49b7a0-kube-api-access-kncv7\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600292 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkx4q\" (UniqueName: \"kubernetes.io/projected/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-kube-api-access-qkx4q\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600314 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-cnibin\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600336 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/722204a2-dbb1-4b08-909b-09fdea49b7a0-cni-binary-copy\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600365 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-run-k8s-cni-cncf-io\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600387 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/722204a2-dbb1-4b08-909b-09fdea49b7a0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600411 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-config\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600432 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-ovn-kubernetes\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600434 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-ovn\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600453 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-system-cni-dir\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600527 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-daemon-config\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600546 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-system-cni-dir\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600596 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-log-socket\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600621 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-slash\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600623 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-var-lib-cni-multus\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " 
pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600635 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-cnibin\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600658 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-systemd-units\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600667 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-system-cni-dir\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600669 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-var-lib-kubelet\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600698 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-node-log\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600702 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-hostroot\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600704 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-var-lib-kubelet\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600724 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-etc-openvswitch\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600749 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nl2m\" (UniqueName: \"kubernetes.io/projected/985b7738-a27c-4276-8160-c2baa64ab7f6-kube-api-access-6nl2m\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600753 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-var-lib-cni-multus\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600773 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-script-lib\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600882 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-kubelet\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600917 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-netns\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600989 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-conf-dir\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601060 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-socket-dir-parent\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601094 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-run-netns\") pod \"multus-fgtpz\" (UID: 
\"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601071 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/985b7738-a27c-4276-8160-c2baa64ab7f6-cni-binary-copy\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601092 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-etc-kubernetes\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601122 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-openvswitch\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601132 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-netd\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.600681 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-run-multus-certs\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601180 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-var-lib-openvswitch\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601190 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601206 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-os-release\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601226 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-etc-openvswitch\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.602645 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601315 4907 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-ovn-kubernetes\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.603342 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/722204a2-dbb1-4b08-909b-09fdea49b7a0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.603775 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/722204a2-dbb1-4b08-909b-09fdea49b7a0-cni-binary-copy\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.604024 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.604271 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-multus-cni-dir\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.604322 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-hostroot\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601287 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/722204a2-dbb1-4b08-909b-09fdea49b7a0-cnibin\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: \"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.601309 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/985b7738-a27c-4276-8160-c2baa64ab7f6-host-run-k8s-cni-cncf-io\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.627749 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nl2m\" (UniqueName: \"kubernetes.io/projected/985b7738-a27c-4276-8160-c2baa64ab7f6-kube-api-access-6nl2m\") pod \"multus-fgtpz\" (UID: \"985b7738-a27c-4276-8160-c2baa64ab7f6\") " pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.629988 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kncv7\" (UniqueName: \"kubernetes.io/projected/722204a2-dbb1-4b08-909b-09fdea49b7a0-kube-api-access-kncv7\") pod \"multus-additional-cni-plugins-jqfkt\" (UID: 
\"722204a2-dbb1-4b08-909b-09fdea49b7a0\") " pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.651700 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.671451 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.678694 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fgtpz" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.685037 4907 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.685093 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.686538 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.686621 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.686633 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.686743 4907 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.686730 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.691770 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 15:00:26.530111799 +0000 UTC Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.698202 4907 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.698543 4907 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 
18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.700429 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.700477 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.700491 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.700508 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.700519 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:11Z","lastTransitionTime":"2026-01-27T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.710505 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.721014 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.725095 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.726459 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.726505 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.726521 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.726543 4907 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.726576 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:11Z","lastTransitionTime":"2026-01-27T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.741629 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.741910 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has 
no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c6
9fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\
\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737
e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909
bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.745342 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.745379 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.745390 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.745411 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.745422 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:11Z","lastTransitionTime":"2026-01-27T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.746897 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.746992 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.747234 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.747303 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.748194 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.748359 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.756723 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.760927 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b2670
2f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435
cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.765816 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.765846 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.765856 4907 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.765874 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.765886 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:11Z","lastTransitionTime":"2026-01-27T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.776650 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.777034 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.779922 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.779968 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.779979 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.779995 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.780007 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:11Z","lastTransitionTime":"2026-01-27T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.788985 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.799657 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: E0127 18:06:11.800210 4907 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.803297 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.803796 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 
18:06:11.803819 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.803830 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.803845 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.803858 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:11Z","lastTransitionTime":"2026-01-27T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.820136 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.833510 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.844772 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.846143 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.854795 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-997jz\" (UniqueName: \"kubernetes.io/projected/195088d8-09aa-4943-8825-ddd4cb453056-kube-api-access-997jz\") pod \"node-ca-9plnb\" (UID: \"195088d8-09aa-4943-8825-ddd4cb453056\") " 
pod="openshift-image-registry/node-ca-9plnb" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.857217 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.869933 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.906486 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.906525 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.906535 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.906572 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.906587 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:11Z","lastTransitionTime":"2026-01-27T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.915998 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-n4rxh" event={"ID":"317dc29e-e919-4bac-894d-e54b69538c31","Type":"ContainerStarted","Data":"c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.917697 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fgtpz" event={"ID":"985b7738-a27c-4276-8160-c2baa64ab7f6","Type":"ContainerStarted","Data":"3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.917745 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fgtpz" event={"ID":"985b7738-a27c-4276-8160-c2baa64ab7f6","Type":"ContainerStarted","Data":"eaf287f038c6113d87a2fe2ea86f1dd42eb5276b3a2451ac4f13444e9acd40ce"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.920504 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.920534 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" 
event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.920544 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"fce9339f716c71b2355a7ba713d746483ccf60e21cfd2fff6b4b274849362374"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.924326 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" event={"ID":"722204a2-dbb1-4b08-909b-09fdea49b7a0","Type":"ContainerStarted","Data":"74cf4023a97668c3ea831b4d657244a86add4e21c5a78c8e7879854228b82275"} Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.932572 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.951897 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.968980 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.982384 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:11 crc kubenswrapper[4907]: I0127 18:06:11.997439 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.009442 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.009504 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.009521 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.009545 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.009588 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:12Z","lastTransitionTime":"2026-01-27T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.013479 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.018152 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-op
erator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.021056 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-9plnb" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.037314 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: W0127 18:06:12.039273 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod195088d8_09aa_4943_8825_ddd4cb453056.slice/crio-eaa01126bc1f1bd8bfc9a198c11fbeda5244d44491e61489afd426db084046a9 WatchSource:0}: Error finding container eaa01126bc1f1bd8bfc9a198c11fbeda5244d44491e61489afd426db084046a9: Status 404 returned error can't find the container with id eaa01126bc1f1bd8bfc9a198c11fbeda5244d44491e61489afd426db084046a9 Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.055757 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.072572 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.093506 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.112766 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.112801 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.112809 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.112828 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.112840 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:12Z","lastTransitionTime":"2026-01-27T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.120660 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.165221 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.210275 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.211294 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.212006 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-config\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.215041 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.215101 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.215117 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.215140 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.215153 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:12Z","lastTransitionTime":"2026-01-27T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.230155 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.269687 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.277614 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkx4q\" (UniqueName: \"kubernetes.io/projected/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-kube-api-access-qkx4q\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.299217 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.317083 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.317135 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.317153 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.317180 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.317194 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:12Z","lastTransitionTime":"2026-01-27T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.339146 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.381982 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.418818 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.420066 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.420128 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.420141 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 
18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.420163 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.420176 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:12Z","lastTransitionTime":"2026-01-27T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.449941 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.451571 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-env-overrides\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.481757 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.489446 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 
18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.495493 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovn-node-metrics-cert\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.523801 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.523853 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.523870 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.523889 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.523904 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:12Z","lastTransitionTime":"2026-01-27T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.544414 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.595417 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2
b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: E0127 18:06:12.601577 4907 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: failed to sync configmap cache: timed out waiting for the condition Jan 27 18:06:12 crc kubenswrapper[4907]: E0127 18:06:12.602244 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-script-lib podName:a62f5e7d-70be-4705-a4b0-d5e4f531cfde nodeName:}" failed. No retries permitted until 2026-01-27 18:06:13.102212075 +0000 UTC m=+28.231494707 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-script-lib") pod "ovnkube-node-qj9w2" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde") : failed to sync configmap cache: timed out waiting for the condition Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.621054 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.626751 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.626794 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 
18:06:12.626811 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.626838 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.626855 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:12Z","lastTransitionTime":"2026-01-27T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.650149 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.679589 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.692296 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 19:46:44.953782435 +0000 UTC Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.718375 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.729408 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.729467 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.729480 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.729502 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.729516 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:12Z","lastTransitionTime":"2026-01-27T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.759886 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.800780 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.832060 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.832122 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.832138 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.832163 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.832179 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:12Z","lastTransitionTime":"2026-01-27T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.839707 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.880777 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.921586 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.929720 4907 generic.go:334] "Generic (PLEG): container finished" podID="722204a2-dbb1-4b08-909b-09fdea49b7a0" containerID="c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0" exitCode=0 Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.929828 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-jqfkt" event={"ID":"722204a2-dbb1-4b08-909b-09fdea49b7a0","Type":"ContainerDied","Data":"c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.930378 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.931262 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9plnb" event={"ID":"195088d8-09aa-4943-8825-ddd4cb453056","Type":"ContainerStarted","Data":"d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.931322 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9plnb" event={"ID":"195088d8-09aa-4943-8825-ddd4cb453056","Type":"ContainerStarted","Data":"eaa01126bc1f1bd8bfc9a198c11fbeda5244d44491e61489afd426db084046a9"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.933973 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.934032 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.934067 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.934118 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.934157 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:12Z","lastTransitionTime":"2026-01-27T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:12 crc kubenswrapper[4907]: I0127 18:06:12.980256 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.022777 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.037538 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.037614 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.037630 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:13 crc 
kubenswrapper[4907]: I0127 18:06:13.037652 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.037666 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:13Z","lastTransitionTime":"2026-01-27T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.065480 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.100431 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.116643 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-script-lib\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.117619 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-script-lib\") pod \"ovnkube-node-qj9w2\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.140281 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:13 crc 
kubenswrapper[4907]: I0127 18:06:13.140533 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.140548 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.140586 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.140601 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:13Z","lastTransitionTime":"2026-01-27T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.141062 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.179346 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.192481 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:13 crc kubenswrapper[4907]: W0127 18:06:13.204468 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda62f5e7d_70be_4705_a4b0_d5e4f531cfde.slice/crio-a983be7de95caeeef4ab80a270899c06c8966038c1e2373e1943b0a9d39bf946 WatchSource:0}: Error finding container a983be7de95caeeef4ab80a270899c06c8966038c1e2373e1943b0a9d39bf946: Status 404 returned error can't find the container with id a983be7de95caeeef4ab80a270899c06c8966038c1e2373e1943b0a9d39bf946 Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.221902 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.242800 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.242833 4907 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.242842 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.242856 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.242865 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:13Z","lastTransitionTime":"2026-01-27T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.258764 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.301916 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.318344 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.318514 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:06:21.318488062 +0000 UTC m=+36.447770684 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.337322 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.345239 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.345278 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.345290 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.345306 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.345315 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:13Z","lastTransitionTime":"2026-01-27T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.378767 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:
06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.419893 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.419944 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.419966 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.419983 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420075 4907 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420142 4907 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420164 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420210 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420224 4907 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420153 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:21.420133405 +0000 UTC m=+36.549416017 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420269 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:21.420245108 +0000 UTC m=+36.549527730 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420081 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420326 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420339 4907 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420327 4907 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:21.42030254 +0000 UTC m=+36.549585162 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.420365 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:21.420357942 +0000 UTC m=+36.549640554 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.428010 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.447705 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.447756 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.447770 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.447787 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.447799 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:13Z","lastTransitionTime":"2026-01-27T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.464764 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.500774 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.540703 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.550325 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.550356 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.550367 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.550383 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.550393 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:13Z","lastTransitionTime":"2026-01-27T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.576964 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.618277 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.652284 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.652338 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.652347 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.652360 4907 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.652369 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:13Z","lastTransitionTime":"2026-01-27T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.692596 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 09:01:10.006072975 +0000 UTC Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.747269 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.747289 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.747421 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.747532 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.747406 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:13 crc kubenswrapper[4907]: E0127 18:06:13.747749 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.754385 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.754417 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.754429 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.754443 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.754468 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:13Z","lastTransitionTime":"2026-01-27T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.857619 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.857678 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.857692 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.857710 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.857721 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:13Z","lastTransitionTime":"2026-01-27T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.937261 4907 generic.go:334] "Generic (PLEG): container finished" podID="722204a2-dbb1-4b08-909b-09fdea49b7a0" containerID="79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c" exitCode=0 Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.937420 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" event={"ID":"722204a2-dbb1-4b08-909b-09fdea49b7a0","Type":"ContainerDied","Data":"79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.939469 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71" exitCode=0 Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.939498 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.939522 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"a983be7de95caeeef4ab80a270899c06c8966038c1e2373e1943b0a9d39bf946"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.960654 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.961183 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.961196 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.961219 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.961232 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:13Z","lastTransitionTime":"2026-01-27T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:13 crc kubenswrapper[4907]: I0127 18:06:13.973243 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.016736 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.038310 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.054818 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.064725 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.064774 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.064786 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.064805 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.064819 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:14Z","lastTransitionTime":"2026-01-27T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.074324 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.089950 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.110632 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.121882 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.135499 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.153590 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.166278 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.181831 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.181873 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.181885 4907 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.181904 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.181919 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:14Z","lastTransitionTime":"2026-01-27T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.183287 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b5
4b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name
\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.196672 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.209012 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.219468 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.258722 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.284709 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.284997 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.285074 4907 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.285190 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.285276 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:14Z","lastTransitionTime":"2026-01-27T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.299657 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.344645 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.381075 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.387921 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.387981 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.387995 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:14 crc 
kubenswrapper[4907]: I0127 18:06:14.388023 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.388039 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:14Z","lastTransitionTime":"2026-01-27T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.417039 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.460411 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.489845 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.489880 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.489890 4907 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.489906 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.489917 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:14Z","lastTransitionTime":"2026-01-27T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.504358 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.542321 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.576789 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.591743 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.591782 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.591794 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.591812 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.591837 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:14Z","lastTransitionTime":"2026-01-27T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.617785 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z 
is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.657125 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.693158 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 01:29:21.02005567 +0000 UTC Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.694586 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.694615 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.694625 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.694639 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.694648 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:14Z","lastTransitionTime":"2026-01-27T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.701633 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.745128 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.780520 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.796781 4907 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.797020 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.797031 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.797046 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.797057 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:14Z","lastTransitionTime":"2026-01-27T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.818209 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.900026 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.900085 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.900100 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.900126 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.900139 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:14Z","lastTransitionTime":"2026-01-27T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.947470 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.947548 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.947579 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.947591 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.947602 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.947611 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b"} Jan 27 18:06:14 crc kubenswrapper[4907]: 
I0127 18:06:14.950348 4907 generic.go:334] "Generic (PLEG): container finished" podID="722204a2-dbb1-4b08-909b-09fdea49b7a0" containerID="b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd" exitCode=0 Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.950376 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" event={"ID":"722204a2-dbb1-4b08-909b-09fdea49b7a0","Type":"ContainerDied","Data":"b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd"} Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.966955 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
1-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:14 crc kubenswrapper[4907]: I0127 18:06:14.984648 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.000150 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 
18:06:15.001955 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.001992 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.002003 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.002018 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.002029 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:15Z","lastTransitionTime":"2026-01-27T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.019629 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.031592 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.059191 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.098100 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.104948 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.104985 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.104997 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.105020 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.105038 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:15Z","lastTransitionTime":"2026-01-27T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.141629 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.178336 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.206739 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.206779 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.206792 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.206806 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.206816 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:15Z","lastTransitionTime":"2026-01-27T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.221334 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.256914 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.298382 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.309673 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:15 crc 
kubenswrapper[4907]: I0127 18:06:15.309720 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.309733 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.309751 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.309765 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:15Z","lastTransitionTime":"2026-01-27T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.349841 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.377433 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.412480 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.412520 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:15 crc 
kubenswrapper[4907]: I0127 18:06:15.412530 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.412545 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.412568 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:15Z","lastTransitionTime":"2026-01-27T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.417568 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.482225 4907 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.514527 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.514599 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.514616 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.514639 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.514652 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:15Z","lastTransitionTime":"2026-01-27T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.617135 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.617179 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.617194 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.617214 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.617229 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:15Z","lastTransitionTime":"2026-01-27T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.693911 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 12:56:30.295421154 +0000 UTC Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.720817 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.720898 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.720927 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.720963 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.720984 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:15Z","lastTransitionTime":"2026-01-27T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.747642 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.747726 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:15 crc kubenswrapper[4907]: E0127 18:06:15.747886 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.747940 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:15 crc kubenswrapper[4907]: E0127 18:06:15.748045 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:15 crc kubenswrapper[4907]: E0127 18:06:15.748205 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.773112 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":
\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.790609 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.809335 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.823656 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.823745 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.823757 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.823779 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.823797 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:15Z","lastTransitionTime":"2026-01-27T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.825738 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.843595 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.878870 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.910989 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.925998 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.926066 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.926087 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.926114 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.926133 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:15Z","lastTransitionTime":"2026-01-27T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.930424 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 
18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.944068 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.955208 4907 generic.go:334] "Generic (PLEG): container finished" podID="722204a2-dbb1-4b08-909b-09fdea49b7a0" containerID="8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e" exitCode=0 Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.955251 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" event={"ID":"722204a2-dbb1-4b08-909b-09fdea49b7a0","Type":"ContainerDied","Data":"8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e"} Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.956820 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 18:06:15.973496 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:15 crc kubenswrapper[4907]: I0127 
18:06:15.989033 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 
18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06b
c35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.001795 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.014027 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.026207 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.029020 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.029045 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.029054 4907 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.029069 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.029082 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:16Z","lastTransitionTime":"2026-01-27T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.058136 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058df
f90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.102669 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.131536 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.131612 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.131630 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.131655 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.131680 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:16Z","lastTransitionTime":"2026-01-27T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.137726 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.178248 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.234826 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.234878 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.234892 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.234913 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.234928 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:16Z","lastTransitionTime":"2026-01-27T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.235657 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.265359 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.302303 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.338291 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.338358 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.338374 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.338398 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.338417 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:16Z","lastTransitionTime":"2026-01-27T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.343889 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8
ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-2
7T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.381786 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.419647 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.440425 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.440482 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.440500 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.440519 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.440535 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:16Z","lastTransitionTime":"2026-01-27T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.459696 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.501756 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.546196 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.546256 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.546270 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.546291 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.546305 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:16Z","lastTransitionTime":"2026-01-27T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.548234 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.580489 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f
9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.620683 4907 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.649246 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.649303 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.649318 4907 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.649340 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.649351 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:16Z","lastTransitionTime":"2026-01-27T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.716423 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 10:08:19.273087622 +0000 UTC Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.752411 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.752456 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.752472 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.752490 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.752502 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:16Z","lastTransitionTime":"2026-01-27T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.855272 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.855318 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.855335 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.855360 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.855381 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:16Z","lastTransitionTime":"2026-01-27T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.957657 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.957989 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.958001 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.958021 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.958033 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:16Z","lastTransitionTime":"2026-01-27T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.963085 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.966372 4907 generic.go:334] "Generic (PLEG): container finished" podID="722204a2-dbb1-4b08-909b-09fdea49b7a0" containerID="31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b" exitCode=0 Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.966409 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" event={"ID":"722204a2-dbb1-4b08-909b-09fdea49b7a0","Type":"ContainerDied","Data":"31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b"} Jan 27 18:06:16 crc kubenswrapper[4907]: I0127 18:06:16.985341 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.003424 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.021281 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.036814 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.060544 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.062417 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.062479 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.062500 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.062521 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.062580 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:17Z","lastTransitionTime":"2026-01-27T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.075703 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.087678 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.101291 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.110030 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.123316 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4
d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.144071 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.156371 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.167027 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.167073 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.167085 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.167105 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.167118 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:17Z","lastTransitionTime":"2026-01-27T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.168950 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.180976 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.220284 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.271907 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.271940 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.271951 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.271967 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.271980 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:17Z","lastTransitionTime":"2026-01-27T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.380367 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.380415 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.380433 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.380456 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.380474 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:17Z","lastTransitionTime":"2026-01-27T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.483419 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.483451 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.483459 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.483472 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.483482 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:17Z","lastTransitionTime":"2026-01-27T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.586883 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.586943 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.586966 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.586993 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.587017 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:17Z","lastTransitionTime":"2026-01-27T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.692072 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.692177 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.692202 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.692232 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.692254 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:17Z","lastTransitionTime":"2026-01-27T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.718458 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 19:59:22.571155137 +0000 UTC Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.750071 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:17 crc kubenswrapper[4907]: E0127 18:06:17.750194 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.750574 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:17 crc kubenswrapper[4907]: E0127 18:06:17.750636 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.750677 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:17 crc kubenswrapper[4907]: E0127 18:06:17.750723 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.795449 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.795509 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.795527 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.795603 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.795651 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:17Z","lastTransitionTime":"2026-01-27T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.907533 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.907602 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.907622 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.907643 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.907656 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:17Z","lastTransitionTime":"2026-01-27T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.974146 4907 generic.go:334] "Generic (PLEG): container finished" podID="722204a2-dbb1-4b08-909b-09fdea49b7a0" containerID="52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3" exitCode=0 Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.974194 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" event={"ID":"722204a2-dbb1-4b08-909b-09fdea49b7a0","Type":"ContainerDied","Data":"52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3"} Jan 27 18:06:17 crc kubenswrapper[4907]: I0127 18:06:17.987488 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.009198 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.010937 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.010982 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.010996 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.011016 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.011041 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:18Z","lastTransitionTime":"2026-01-27T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.026664 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.042002 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.055031 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.068107 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.087896 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.102442 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.113808 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.114304 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.114333 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.114345 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.114363 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.114376 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:18Z","lastTransitionTime":"2026-01-27T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.128465 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z 
is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.141084 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.159849 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.181975 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.195804 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.206965 4907 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.216581 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.216625 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.216638 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.216660 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.216671 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:18Z","lastTransitionTime":"2026-01-27T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.319065 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.319117 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.319129 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.319146 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.319160 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:18Z","lastTransitionTime":"2026-01-27T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.422441 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.422532 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.422561 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.422624 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.422645 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:18Z","lastTransitionTime":"2026-01-27T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.525379 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.525437 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.525448 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.525472 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.525490 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:18Z","lastTransitionTime":"2026-01-27T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.628885 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.628938 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.628950 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.628967 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.628979 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:18Z","lastTransitionTime":"2026-01-27T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.719456 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:34:12.386011194 +0000 UTC Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.732137 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.732211 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.732232 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.732258 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.732276 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:18Z","lastTransitionTime":"2026-01-27T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.835563 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.835722 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.835871 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.835909 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.835935 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:18Z","lastTransitionTime":"2026-01-27T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.939357 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.939429 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.939451 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.939481 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.939501 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:18Z","lastTransitionTime":"2026-01-27T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:18 crc kubenswrapper[4907]: I0127 18:06:18.984781 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" event={"ID":"722204a2-dbb1-4b08-909b-09fdea49b7a0","Type":"ContainerStarted","Data":"018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff"} Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.000221 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.016682 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.030624 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.043034 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.043111 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.043131 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.043160 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.043184 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:19Z","lastTransitionTime":"2026-01-27T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.057290 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.084791 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.106743 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.128175 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.143423 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.145581 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.145651 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.145664 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.145683 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.145694 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:19Z","lastTransitionTime":"2026-01-27T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.165875 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.182385 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.200070 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.213489 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.226821 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.241619 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.248625 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.248670 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.248690 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.248715 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.248733 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:19Z","lastTransitionTime":"2026-01-27T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.255194 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc
/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.351795 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 
18:06:19.351835 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.351849 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.351866 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.351875 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:19Z","lastTransitionTime":"2026-01-27T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.454483 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.454548 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.454605 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.454631 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.454656 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:19Z","lastTransitionTime":"2026-01-27T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.557657 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.557718 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.557734 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.557757 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.557774 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:19Z","lastTransitionTime":"2026-01-27T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.661209 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.661281 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.661299 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.661330 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.661348 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:19Z","lastTransitionTime":"2026-01-27T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.719721 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 13:19:38.255929884 +0000 UTC Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.747739 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.747801 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.747753 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:19 crc kubenswrapper[4907]: E0127 18:06:19.747940 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:19 crc kubenswrapper[4907]: E0127 18:06:19.748038 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:19 crc kubenswrapper[4907]: E0127 18:06:19.748245 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.765432 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.765491 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.765517 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.765546 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.765574 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:19Z","lastTransitionTime":"2026-01-27T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.867814 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.867893 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.867912 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.867943 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.867963 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:19Z","lastTransitionTime":"2026-01-27T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.971533 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.971611 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.971623 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.971639 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.971650 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:19Z","lastTransitionTime":"2026-01-27T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:19 crc kubenswrapper[4907]: I0127 18:06:19.992358 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c"} Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.018010 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\
\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.035973 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.050216 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.064323 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.074658 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.074709 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.074721 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.074740 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.074751 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:20Z","lastTransitionTime":"2026-01-27T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.079762 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.095874 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.112720 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.127010 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.142926 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.158316 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.173050 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.177373 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.177425 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.177437 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:20 crc 
kubenswrapper[4907]: I0127 18:06:20.177458 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.177470 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:20Z","lastTransitionTime":"2026-01-27T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.189887 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.205122 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.219524 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.239285 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.280041 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.280115 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.280129 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.280149 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.280163 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:20Z","lastTransitionTime":"2026-01-27T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.382825 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.382878 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.382893 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.382913 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.382927 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:20Z","lastTransitionTime":"2026-01-27T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.486126 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.486445 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.486514 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.486670 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.486741 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:20Z","lastTransitionTime":"2026-01-27T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.589766 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.590132 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.590199 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.590270 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.590358 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:20Z","lastTransitionTime":"2026-01-27T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.693952 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.694389 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.694533 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.694757 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.694900 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:20Z","lastTransitionTime":"2026-01-27T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.720652 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:45:10.103945187 +0000 UTC Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.797954 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.798051 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.798104 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.798141 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.798168 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:20Z","lastTransitionTime":"2026-01-27T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.901222 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.901278 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.901288 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.901307 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.901320 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:20Z","lastTransitionTime":"2026-01-27T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.996827 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.996877 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:20 crc kubenswrapper[4907]: I0127 18:06:20.996891 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.003678 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.003723 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.003738 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.003761 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.003776 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.024006 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.024088 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.047455 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.060682 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.070609 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.084309 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.096786 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.107025 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.107091 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.107110 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.107135 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.107153 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.114931 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.143230 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.157107 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.169754 4907 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.185209 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.203038 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.209157 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.209436 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.209571 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.209690 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.209785 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.219330 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.235703 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.249563 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.267813 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.292500 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.312698 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.312758 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.312774 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.312794 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.312809 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.314446 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.327431 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.337773 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.363917 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dc
b1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.387683 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.404942 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.405386 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.405629 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:06:37.405604206 +0000 UTC m=+52.534886828 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.416061 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.416099 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.416110 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.416130 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.416142 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.419513 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.432978 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.447270 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.464246 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.480037 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.493347 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.506934 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.506985 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.507017 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.507057 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" 
(UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507126 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507143 4907 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507154 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507158 4907 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507209 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:37.507187428 +0000 UTC m=+52.636470030 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507170 4907 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507266 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:37.50723924 +0000 UTC m=+52.636521852 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507281 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507300 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-27 18:06:37.507287421 +0000 UTC m=+52.636570093 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507322 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507343 4907 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.507429 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:37.507401714 +0000 UTC m=+52.636684386 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.508437 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\"
:\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.518145 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.518188 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.518200 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.518219 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.518236 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.526302 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.621143 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.621210 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.621232 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.621259 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.621278 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.705675 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.721886 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 09:56:48.160403388 +0000 UTC Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.723484 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.723524 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.723535 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.723559 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.723574 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.747794 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.747894 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.747794 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.747987 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.748092 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.748248 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.764548 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.813632 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.814433 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.814482 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.814496 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.814537 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.814557 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.831746 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.837319 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.842783 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.842848 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.842870 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.842901 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.842921 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.858630 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:
06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.863713 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.867679 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.867722 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.867734 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.867751 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.867763 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.875070 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.881786 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.886063 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.886105 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.886119 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.886141 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.886156 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.894698 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.901450 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.905248 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.905284 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.905298 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.905316 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.905330 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.917731 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.919721 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: E0127 18:06:21.919832 4907 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.921897 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.921925 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.921935 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.921955 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.921966 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:21Z","lastTransitionTime":"2026-01-27T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.929025 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.940383 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.952162 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.966036 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba3
42998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.978012 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:21 crc kubenswrapper[4907]: I0127 18:06:21.991381 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.002680 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:22Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.015059 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:22Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.024732 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.024754 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.024764 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.024780 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.024793 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:22Z","lastTransitionTime":"2026-01-27T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.127138 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.127201 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.127219 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.127242 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.127261 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:22Z","lastTransitionTime":"2026-01-27T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.230157 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.230202 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.230214 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.230230 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.230241 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:22Z","lastTransitionTime":"2026-01-27T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.333096 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.333146 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.333155 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.333175 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.333186 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:22Z","lastTransitionTime":"2026-01-27T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.436086 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.436132 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.436142 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.436159 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.436171 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:22Z","lastTransitionTime":"2026-01-27T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.539415 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.539492 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.539504 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.539537 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.539554 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:22Z","lastTransitionTime":"2026-01-27T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.643126 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.643187 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.643205 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.643229 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.643247 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:22Z","lastTransitionTime":"2026-01-27T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.722892 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:34:38.078026048 +0000 UTC Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.746133 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.746202 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.746214 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.746232 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.746265 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:22Z","lastTransitionTime":"2026-01-27T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.849551 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.849633 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.849653 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.849676 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.849694 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:22Z","lastTransitionTime":"2026-01-27T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.953623 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.953680 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.953696 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.953722 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:22 crc kubenswrapper[4907]: I0127 18:06:22.953747 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:22Z","lastTransitionTime":"2026-01-27T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.007535 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/0.log" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.012336 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c" exitCode=1 Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.012391 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.021006 4907 scope.go:117] "RemoveContainer" containerID="d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.039748 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.056916 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.056971 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.056984 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.057005 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.057017 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:23Z","lastTransitionTime":"2026-01-27T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.059211 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.079510 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-
cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.112325 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.133313 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.150683 4907 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.159954 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.160142 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.160237 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.160338 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.160434 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:23Z","lastTransitionTime":"2026-01-27T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.168391 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.185409 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba3
42998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.202674 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.223967 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.242321 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.260440 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.264597 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:23 crc 
kubenswrapper[4907]: I0127 18:06:23.264641 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.264656 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.264679 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.264700 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:23Z","lastTransitionTime":"2026-01-27T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.289078 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18
:06:22Z\\\",\\\"message\\\":\\\"s/externalversions/factory.go:140\\\\nI0127 18:06:22.200678 6203 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.200815 6203 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.201044 6203 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:06:22.201190 6203 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:06:22.201202 6203 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:06:22.201216 6203 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:06:22.201222 6203 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:06:22.201259 6203 factory.go:656] Stopping watch factory\\\\nI0127 18:06:22.201288 6203 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:06:22.201298 6203 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:06:22.201305 6203 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:06:22.201312 6203 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:06:22.201317 6203 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.307734 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.321868 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.367403 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.367776 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.367885 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.367970 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.368057 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:23Z","lastTransitionTime":"2026-01-27T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.471651 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.471699 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.471717 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.471743 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.471764 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:23Z","lastTransitionTime":"2026-01-27T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.574417 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.574462 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.574476 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.574501 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.574514 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:23Z","lastTransitionTime":"2026-01-27T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.677520 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.677683 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.677714 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.677751 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.677775 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:23Z","lastTransitionTime":"2026-01-27T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.723217 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 07:30:01.816931376 +0000 UTC Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.748073 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.748142 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:23 crc kubenswrapper[4907]: E0127 18:06:23.748227 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.748224 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:23 crc kubenswrapper[4907]: E0127 18:06:23.748370 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:23 crc kubenswrapper[4907]: E0127 18:06:23.748722 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.781891 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.781963 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.781985 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.782018 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.782038 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:23Z","lastTransitionTime":"2026-01-27T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.787972 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb"] Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.788846 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.793274 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.793376 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.811293 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.823634 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.838383 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.859004 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:22Z\\\",\\\"message\\\":\\\"s/externalversions/factory.go:140\\\\nI0127 18:06:22.200678 6203 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.200815 6203 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.201044 6203 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:06:22.201190 6203 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:06:22.201202 6203 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:06:22.201216 6203 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:06:22.201222 6203 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:06:22.201259 6203 factory.go:656] Stopping watch factory\\\\nI0127 18:06:22.201288 6203 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:06:22.201298 6203 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:06:22.201305 6203 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:06:22.201312 6203 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:06:22.201317 6203 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.881011 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.885131 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.885163 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.885173 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.885198 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.885212 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:23Z","lastTransitionTime":"2026-01-27T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.895938 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.912287 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.926608 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.935771 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z99h\" (UniqueName: \"kubernetes.io/projected/6fe1d896-28da-48d2-9a3e-e4154091a601-kube-api-access-7z99h\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" (UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.935837 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6fe1d896-28da-48d2-9a3e-e4154091a601-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" (UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.935874 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6fe1d896-28da-48d2-9a3e-e4154091a601-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" 
(UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.935924 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6fe1d896-28da-48d2-9a3e-e4154091a601-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" (UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.947976 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\
\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\
\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":
0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.969141 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba3
42998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.988331 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.988400 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.988409 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.988425 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.988436 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:23Z","lastTransitionTime":"2026-01-27T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:23 crc kubenswrapper[4907]: I0127 18:06:23.991140 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.008183 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.027652 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.032293 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/0.log" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.036470 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z99h\" (UniqueName: 
\"kubernetes.io/projected/6fe1d896-28da-48d2-9a3e-e4154091a601-kube-api-access-7z99h\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" (UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.036550 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6fe1d896-28da-48d2-9a3e-e4154091a601-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" (UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.036615 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6fe1d896-28da-48d2-9a3e-e4154091a601-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" (UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.036665 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6fe1d896-28da-48d2-9a3e-e4154091a601-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" (UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.036926 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec"} Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.037605 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.037886 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6fe1d896-28da-48d2-9a3e-e4154091a601-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" (UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.038232 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6fe1d896-28da-48d2-9a3e-e4154091a601-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" (UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.046448 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.046603 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6fe1d896-28da-48d2-9a3e-e4154091a601-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" (UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.063487 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.068101 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z99h\" (UniqueName: \"kubernetes.io/projected/6fe1d896-28da-48d2-9a3e-e4154091a601-kube-api-access-7z99h\") pod \"ovnkube-control-plane-749d76644c-xz9tb\" (UID: \"6fe1d896-28da-48d2-9a3e-e4154091a601\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.079145 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.092801 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.092852 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.092862 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:24 crc 
kubenswrapper[4907]: I0127 18:06:24.092884 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.092900 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:24Z","lastTransitionTime":"2026-01-27T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.093650 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.106107 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.109375 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-
rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: W0127 18:06:24.120798 4907 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fe1d896_28da_48d2_9a3e_e4154091a601.slice/crio-154e6978828df1ea01ae394124971d2b82cff8fc5ea9441bccab4868eaf80383 WatchSource:0}: Error finding container 154e6978828df1ea01ae394124971d2b82cff8fc5ea9441bccab4868eaf80383: Status 404 returned error can't find the container with id 154e6978828df1ea01ae394124971d2b82cff8fc5ea9441bccab4868eaf80383 Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.129051 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.142907 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.161273 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.179596 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:22Z\\\",\\\"message\\\":\\\"s/externalversions/factory.go:140\\\\nI0127 18:06:22.200678 6203 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.200815 6203 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.201044 6203 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:06:22.201190 6203 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:06:22.201202 6203 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:06:22.201216 6203 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:06:22.201222 6203 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:06:22.201259 6203 factory.go:656] Stopping watch factory\\\\nI0127 18:06:22.201288 6203 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:06:22.201298 6203 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:06:22.201305 6203 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:06:22.201312 6203 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:06:22.201317 6203 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.201756 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.201802 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.201814 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.201833 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.201842 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:24Z","lastTransitionTime":"2026-01-27T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.212196 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.226498 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.249023 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.304277 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.309265 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.309324 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.309340 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.309364 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.309376 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:24Z","lastTransitionTime":"2026-01-27T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.320832 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.336741 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\
\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.351879 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.368796 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.383790 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.397276 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.412667 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.412718 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.412730 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.412752 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.412767 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:24Z","lastTransitionTime":"2026-01-27T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.514922 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.514975 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.514987 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.515007 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.515020 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:24Z","lastTransitionTime":"2026-01-27T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.617390 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.617445 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.617459 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.617477 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.617489 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:24Z","lastTransitionTime":"2026-01-27T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.720324 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.720383 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.720397 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.720415 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.720428 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:24Z","lastTransitionTime":"2026-01-27T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.723772 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 19:08:26.91955299 +0000 UTC Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.823369 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.823421 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.823435 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.823454 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.823503 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:24Z","lastTransitionTime":"2026-01-27T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.926766 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.926836 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.926848 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.926868 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:24 crc kubenswrapper[4907]: I0127 18:06:24.926881 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:24Z","lastTransitionTime":"2026-01-27T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.029330 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.029389 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.029407 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.029428 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.029444 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:25Z","lastTransitionTime":"2026-01-27T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.042603 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/1.log" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.043279 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/0.log" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.045906 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec" exitCode=1 Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.046005 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.046083 4907 scope.go:117] "RemoveContainer" containerID="d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.046984 4907 scope.go:117] "RemoveContainer" containerID="b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec" Jan 27 18:06:25 crc kubenswrapper[4907]: E0127 18:06:25.047245 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.049538 4907 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" event={"ID":"6fe1d896-28da-48d2-9a3e-e4154091a601","Type":"ContainerStarted","Data":"a6ebec1e1cf2a0697165566f65cf9439329acd789c3660dd00eb56bbab560cbd"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.049617 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" event={"ID":"6fe1d896-28da-48d2-9a3e-e4154091a601","Type":"ContainerStarted","Data":"82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.049636 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" event={"ID":"6fe1d896-28da-48d2-9a3e-e4154091a601","Type":"ContainerStarted","Data":"154e6978828df1ea01ae394124971d2b82cff8fc5ea9441bccab4868eaf80383"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.064240 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.080384 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.100917 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:22Z\\\",\\\"message\\\":\\\"s/externalversions/factory.go:140\\\\nI0127 18:06:22.200678 6203 reflector.go:311] 
Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.200815 6203 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.201044 6203 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:06:22.201190 6203 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:06:22.201202 6203 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:06:22.201216 6203 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:06:22.201222 6203 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:06:22.201259 6203 factory.go:656] Stopping watch factory\\\\nI0127 18:06:22.201288 6203 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:06:22.201298 6203 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:06:22.201305 6203 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:06:22.201312 6203 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:06:22.201317 6203 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:24Z\\\",\\\"message\\\":\\\"adbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}\\\\nI0127 18:06:24.694725 6347 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:24.694734 6347 services_controller.go:360] Finished syncing service console on namespace openshift-console for network=default : 1.566197ms\\\\nI0127 18:06:24.694737 6347 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0127 18:06:24.694713 6347 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:24.694767 6347 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-operators for 
net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.119051 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.131851 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.131896 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.131905 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 
18:06:25.131918 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.131929 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:25Z","lastTransitionTime":"2026-01-27T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.135501 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a
45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.150611 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.164921 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.179941 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa
41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.201102 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.217625 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.231654 4907 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.234890 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.234951 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.234969 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.234991 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.235007 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:25Z","lastTransitionTime":"2026-01-27T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.248173 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.262262 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.280075 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-n2z5k"] Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.280491 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name
\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 
18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.280817 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:25 crc kubenswrapper[4907]: E0127 18:06:25.280909 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.295209 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.310181 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.324748 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.338376 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.338442 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.338452 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.338472 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.338487 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:25Z","lastTransitionTime":"2026-01-27T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.339360 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.352260 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.357631 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.357715 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xxqh\" (UniqueName: 
\"kubernetes.io/projected/eeaae2ee-c57b-4323-9d3c-563d87d85f08-kube-api-access-9xxqh\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.368743 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06
:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b
2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\
\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.379496 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 
27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.398995 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\
\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847
b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.412841 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.423409 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.436374 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.442424 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.442521 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.442534 4907 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.442583 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.442603 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:25Z","lastTransitionTime":"2026-01-27T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.453283 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139e
e8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329acd789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.458473 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.458574 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xxqh\" (UniqueName: \"kubernetes.io/projected/eeaae2ee-c57b-4323-9d3c-563d87d85f08-kube-api-access-9xxqh\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:25 crc kubenswrapper[4907]: E0127 18:06:25.458751 4907 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:25 crc kubenswrapper[4907]: E0127 18:06:25.458856 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs podName:eeaae2ee-c57b-4323-9d3c-563d87d85f08 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:25.95883061 +0000 UTC m=+41.088113412 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs") pod "network-metrics-daemon-n2z5k" (UID: "eeaae2ee-c57b-4323-9d3c-563d87d85f08") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.466021 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.479169 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.485794 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xxqh\" (UniqueName: \"kubernetes.io/projected/eeaae2ee-c57b-4323-9d3c-563d87d85f08-kube-api-access-9xxqh\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.489880 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.500004 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.513197 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.533466 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:22Z\\\",\\\"message\\\":\\\"s/externalversions/factory.go:140\\\\nI0127 18:06:22.200678 6203 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.200815 6203 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.201044 6203 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:06:22.201190 6203 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:06:22.201202 6203 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:06:22.201216 6203 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:06:22.201222 6203 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:06:22.201259 6203 factory.go:656] Stopping watch factory\\\\nI0127 18:06:22.201288 6203 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:06:22.201298 6203 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:06:22.201305 6203 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:06:22.201312 6203 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:06:22.201317 6203 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:24Z\\\",\\\"message\\\":\\\"adbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}\\\\nI0127 18:06:24.694725 6347 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:24.694734 6347 services_controller.go:360] Finished syncing service console on namespace openshift-console for network=default : 1.566197ms\\\\nI0127 18:06:24.694737 6347 ovn.go:134] Ensuring 
zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0127 18:06:24.694713 6347 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:24.694767 6347 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-operators for net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bi
n\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.545275 4907 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.545322 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.545337 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.545358 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.545372 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:25Z","lastTransitionTime":"2026-01-27T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.548467 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.690861 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.690997 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.691014 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.691046 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.691065 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:25Z","lastTransitionTime":"2026-01-27T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.724788 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 02:42:16.968043399 +0000 UTC Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.747743 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.747855 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:25 crc kubenswrapper[4907]: E0127 18:06:25.747940 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.747961 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:25 crc kubenswrapper[4907]: E0127 18:06:25.748010 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:25 crc kubenswrapper[4907]: E0127 18:06:25.748126 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.763656 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc 
kubenswrapper[4907]: I0127 18:06:25.785230 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.793685 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.793724 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.793733 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.793752 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.793762 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:25Z","lastTransitionTime":"2026-01-27T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.803476 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.818601 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.832460 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.850650 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dc
b1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.877324 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba3
42998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.896631 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.897531 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.897602 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.897616 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.897633 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.897647 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:25Z","lastTransitionTime":"2026-01-27T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.915901 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.929675 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.944987 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.958824 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.968711 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.980324 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.991022 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:25 crc kubenswrapper[4907]: I0127 18:06:25.994371 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:25 crc kubenswrapper[4907]: E0127 18:06:25.994583 4907 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:25 crc kubenswrapper[4907]: E0127 18:06:25.994651 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs podName:eeaae2ee-c57b-4323-9d3c-563d87d85f08 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:26.994635219 +0000 UTC m=+42.123917831 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs") pod "network-metrics-daemon-n2z5k" (UID: "eeaae2ee-c57b-4323-9d3c-563d87d85f08") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.001840 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.002030 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.002063 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.002640 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.002929 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:26Z","lastTransitionTime":"2026-01-27T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.008718 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z 
is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.028540 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d92f6d8c4a04ab929182d27d5bd614465085fa5f2eab8d359e3b777d3437ee8c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:22Z\\\",\\\"message\\\":\\\"s/externalversions/factory.go:140\\\\nI0127 18:06:22.200678 6203 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.200815 6203 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:06:22.201044 6203 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:06:22.201190 6203 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:06:22.201202 6203 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:06:22.201216 6203 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:06:22.201222 6203 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:06:22.201259 6203 factory.go:656] Stopping watch factory\\\\nI0127 18:06:22.201288 6203 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:06:22.201298 6203 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:06:22.201305 6203 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:06:22.201312 6203 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:06:22.201317 6203 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:24Z\\\",\\\"message\\\":\\\"adbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}\\\\nI0127 18:06:24.694725 6347 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:24.694734 6347 services_controller.go:360] Finished syncing service console on namespace openshift-console for network=default : 1.566197ms\\\\nI0127 18:06:24.694737 6347 ovn.go:134] Ensuring 
zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0127 18:06:24.694713 6347 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:24.694767 6347 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-operators for net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bi
n\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.055850 4907 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/1.log" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.060757 4907 scope.go:117] "RemoveContainer" containerID="b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec" Jan 27 18:06:26 crc kubenswrapper[4907]: E0127 18:06:26.061062 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.080824 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:24Z\\\",\\\"message\\\":\\\"adbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}\\\\nI0127 18:06:24.694725 6347 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:24.694734 6347 services_controller.go:360] Finished syncing service console on namespace openshift-console for network=default : 1.566197ms\\\\nI0127 18:06:24.694737 6347 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0127 18:06:24.694713 6347 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:24.694767 6347 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-operators for net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e99
5781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.095126 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.106461 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.106652 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.106673 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.106682 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.106698 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.106709 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:26Z","lastTransitionTime":"2026-01-27T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.119749 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z 
is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.132860 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.148333 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-pl
ugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated
\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.160731 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc 
kubenswrapper[4907]: I0127 18:06:26.180032 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.194885 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.208117 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.211838 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.212220 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.212269 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.212299 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.212316 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:26Z","lastTransitionTime":"2026-01-27T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.221064 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.231349 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.248950 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba3
42998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.268063 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.293674 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.314896 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.314980 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.314993 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.315014 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.315028 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:26Z","lastTransitionTime":"2026-01-27T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.322493 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.343482 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.417896 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.417953 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.417968 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.418007 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.418021 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:26Z","lastTransitionTime":"2026-01-27T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.520890 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.520958 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.520980 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.521009 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.521029 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:26Z","lastTransitionTime":"2026-01-27T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.624064 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.624127 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.624140 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.624162 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.624175 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:26Z","lastTransitionTime":"2026-01-27T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.725444 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:47:23.061337105 +0000 UTC Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.727681 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.727836 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.727922 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.728018 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.728109 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:26Z","lastTransitionTime":"2026-01-27T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.747127 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:26 crc kubenswrapper[4907]: E0127 18:06:26.747288 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.831596 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.831642 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.831656 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.831675 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.831687 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:26Z","lastTransitionTime":"2026-01-27T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.933724 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.933761 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.933770 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.933784 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:26 crc kubenswrapper[4907]: I0127 18:06:26.933798 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:26Z","lastTransitionTime":"2026-01-27T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.007543 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:27 crc kubenswrapper[4907]: E0127 18:06:27.007739 4907 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:27 crc kubenswrapper[4907]: E0127 18:06:27.007841 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs podName:eeaae2ee-c57b-4323-9d3c-563d87d85f08 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:29.007815714 +0000 UTC m=+44.137098326 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs") pod "network-metrics-daemon-n2z5k" (UID: "eeaae2ee-c57b-4323-9d3c-563d87d85f08") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.041095 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.041133 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.041189 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.041286 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.041307 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:27Z","lastTransitionTime":"2026-01-27T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.144933 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.145029 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.145044 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.145063 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.145080 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:27Z","lastTransitionTime":"2026-01-27T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.248390 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.248435 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.248443 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.248463 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.248478 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:27Z","lastTransitionTime":"2026-01-27T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.351637 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.351696 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.351705 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.351723 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.351739 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:27Z","lastTransitionTime":"2026-01-27T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.454688 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.454742 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.454755 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.454776 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.454789 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:27Z","lastTransitionTime":"2026-01-27T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.558152 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.558219 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.558230 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.558249 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.558262 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:27Z","lastTransitionTime":"2026-01-27T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.661277 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.661394 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.661425 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.661463 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.661491 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:27Z","lastTransitionTime":"2026-01-27T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.726208 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:03:10.379438039 +0000 UTC Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.748102 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.748178 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:27 crc kubenswrapper[4907]: E0127 18:06:27.748267 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:27 crc kubenswrapper[4907]: E0127 18:06:27.748390 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.748510 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:27 crc kubenswrapper[4907]: E0127 18:06:27.748637 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.764049 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.764115 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.764126 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.764161 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.764173 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:27Z","lastTransitionTime":"2026-01-27T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.867421 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.867519 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.867546 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.867608 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.867634 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:27Z","lastTransitionTime":"2026-01-27T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.970079 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.970122 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.970139 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.970160 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:27 crc kubenswrapper[4907]: I0127 18:06:27.970177 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:27Z","lastTransitionTime":"2026-01-27T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.073172 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.073240 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.073252 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.073274 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.073287 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:28Z","lastTransitionTime":"2026-01-27T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.176673 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.176756 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.176779 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.176823 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.176845 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:28Z","lastTransitionTime":"2026-01-27T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.279840 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.279877 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.279886 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.279900 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.279911 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:28Z","lastTransitionTime":"2026-01-27T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.382915 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.382981 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.382999 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.383022 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.383040 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:28Z","lastTransitionTime":"2026-01-27T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.486237 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.486600 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.486698 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.486794 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.486857 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:28Z","lastTransitionTime":"2026-01-27T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.589107 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.589172 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.589190 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.589214 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.589231 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:28Z","lastTransitionTime":"2026-01-27T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.692176 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.692223 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.692235 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.692256 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.692269 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:28Z","lastTransitionTime":"2026-01-27T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.726633 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 04:18:09.046961465 +0000 UTC Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.747642 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:28 crc kubenswrapper[4907]: E0127 18:06:28.748377 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.794948 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.795006 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.795018 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.795040 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.795053 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:28Z","lastTransitionTime":"2026-01-27T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.897955 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.898087 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.898114 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.898142 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:28 crc kubenswrapper[4907]: I0127 18:06:28.898160 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:28Z","lastTransitionTime":"2026-01-27T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.000885 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.000944 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.000962 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.000985 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.001002 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:29Z","lastTransitionTime":"2026-01-27T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.030713 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:29 crc kubenswrapper[4907]: E0127 18:06:29.030965 4907 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:29 crc kubenswrapper[4907]: E0127 18:06:29.031047 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs podName:eeaae2ee-c57b-4323-9d3c-563d87d85f08 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:33.031020287 +0000 UTC m=+48.160302949 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs") pod "network-metrics-daemon-n2z5k" (UID: "eeaae2ee-c57b-4323-9d3c-563d87d85f08") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.103882 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.103933 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.103943 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.103957 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.103966 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:29Z","lastTransitionTime":"2026-01-27T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.207168 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.207478 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.207590 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.207712 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.207819 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:29Z","lastTransitionTime":"2026-01-27T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.310862 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.311338 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.311553 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.311804 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.312023 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:29Z","lastTransitionTime":"2026-01-27T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.416655 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.417082 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.417256 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.417398 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.417511 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:29Z","lastTransitionTime":"2026-01-27T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.521115 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.521166 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.521179 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.521197 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.521210 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:29Z","lastTransitionTime":"2026-01-27T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.624231 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.624286 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.624306 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.624331 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.624349 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:29Z","lastTransitionTime":"2026-01-27T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.726989 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 02:21:26.415429877 +0000 UTC Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.727145 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.727227 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.727247 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.727271 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.727286 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:29Z","lastTransitionTime":"2026-01-27T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.748010 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.748093 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:29 crc kubenswrapper[4907]: E0127 18:06:29.748199 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.748284 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:29 crc kubenswrapper[4907]: E0127 18:06:29.748472 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:29 crc kubenswrapper[4907]: E0127 18:06:29.748685 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.830509 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.830618 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.830636 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.830659 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.830673 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:29Z","lastTransitionTime":"2026-01-27T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.934391 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.934446 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.934459 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.934481 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:29 crc kubenswrapper[4907]: I0127 18:06:29.934496 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:29Z","lastTransitionTime":"2026-01-27T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.038111 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.038186 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.038208 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.038243 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.038273 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:30Z","lastTransitionTime":"2026-01-27T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.142020 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.142095 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.142114 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.142139 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.142158 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:30Z","lastTransitionTime":"2026-01-27T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.245544 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.245639 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.245657 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.245682 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.245699 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:30Z","lastTransitionTime":"2026-01-27T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.349245 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.349292 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.349304 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.349323 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.349334 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:30Z","lastTransitionTime":"2026-01-27T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.452424 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.452481 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.452494 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.452515 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.452528 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:30Z","lastTransitionTime":"2026-01-27T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.555543 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.555619 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.555635 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.555659 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.555674 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:30Z","lastTransitionTime":"2026-01-27T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.658419 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.658471 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.658512 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.658537 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.658553 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:30Z","lastTransitionTime":"2026-01-27T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.727649 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 14:56:43.652500938 +0000 UTC Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.747075 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:30 crc kubenswrapper[4907]: E0127 18:06:30.747257 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.762502 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.762599 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.762624 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.762656 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.762678 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:30Z","lastTransitionTime":"2026-01-27T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.865430 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.865994 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.866190 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.866375 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.866673 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:30Z","lastTransitionTime":"2026-01-27T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.969692 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.969767 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.969787 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.969814 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:30 crc kubenswrapper[4907]: I0127 18:06:30.969834 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:30Z","lastTransitionTime":"2026-01-27T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.072872 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.073202 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.073335 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.073484 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.073682 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:31Z","lastTransitionTime":"2026-01-27T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.176137 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.176507 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.176612 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.176706 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.176810 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:31Z","lastTransitionTime":"2026-01-27T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.280190 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.280531 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.280670 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.280791 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.280876 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:31Z","lastTransitionTime":"2026-01-27T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.383379 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.383782 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.383904 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.384038 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.384128 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:31Z","lastTransitionTime":"2026-01-27T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.487619 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.487697 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.487713 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.487734 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.487748 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:31Z","lastTransitionTime":"2026-01-27T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.590968 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.591273 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.591356 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.591445 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.591535 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:31Z","lastTransitionTime":"2026-01-27T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.694086 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.694161 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.694183 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.694294 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.694315 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:31Z","lastTransitionTime":"2026-01-27T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.727953 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:38:34.48266556 +0000 UTC Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.747526 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.747628 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.747971 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:31 crc kubenswrapper[4907]: E0127 18:06:31.748129 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:31 crc kubenswrapper[4907]: E0127 18:06:31.748280 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:31 crc kubenswrapper[4907]: E0127 18:06:31.748503 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.797006 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.797066 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.797083 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.797127 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.797147 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:31Z","lastTransitionTime":"2026-01-27T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.900645 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.900700 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.900713 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.900736 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:31 crc kubenswrapper[4907]: I0127 18:06:31.900752 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:31Z","lastTransitionTime":"2026-01-27T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.003152 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.003194 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.003204 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.003222 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.003234 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.105794 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.106145 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.106237 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.106339 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.106425 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.210846 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.210907 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.210917 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.210936 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.210947 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.226661 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.226704 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.226742 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.226763 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.226774 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: E0127 18:06:32.246361 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.252163 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.252218 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.252227 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.252278 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.252292 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: E0127 18:06:32.265578 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.269391 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.269451 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.269463 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.269484 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.269497 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: E0127 18:06:32.282455 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.287312 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.287362 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.287377 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.287398 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.287425 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: E0127 18:06:32.301838 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.308866 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.308905 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.308917 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.308940 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.308953 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: E0127 18:06:32.322727 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:32 crc kubenswrapper[4907]: E0127 18:06:32.323146 4907 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.324622 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.324663 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.324673 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.324688 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.324699 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.426772 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.426821 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.426834 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.426855 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.426868 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.530695 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.530752 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.530771 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.530841 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.530857 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.634528 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.634601 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.634613 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.634631 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.634643 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.728904 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 07:07:25.041195135 +0000 UTC Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.736875 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.736925 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.736967 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.736988 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.737001 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.747159 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:32 crc kubenswrapper[4907]: E0127 18:06:32.747344 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.839955 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.839998 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.840009 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.840025 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.840040 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.943722 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.944157 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.944336 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.944520 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:32 crc kubenswrapper[4907]: I0127 18:06:32.944754 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:32Z","lastTransitionTime":"2026-01-27T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.048675 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.048746 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.048774 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.048800 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.048829 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:33Z","lastTransitionTime":"2026-01-27T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.074156 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:33 crc kubenswrapper[4907]: E0127 18:06:33.074390 4907 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:33 crc kubenswrapper[4907]: E0127 18:06:33.074485 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs podName:eeaae2ee-c57b-4323-9d3c-563d87d85f08 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:41.074458416 +0000 UTC m=+56.203741068 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs") pod "network-metrics-daemon-n2z5k" (UID: "eeaae2ee-c57b-4323-9d3c-563d87d85f08") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.152458 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.152541 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.152599 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.152626 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.152647 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:33Z","lastTransitionTime":"2026-01-27T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.256206 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.256251 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.256267 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.256291 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.256307 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:33Z","lastTransitionTime":"2026-01-27T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.359021 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.359069 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.359084 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.359104 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.359118 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:33Z","lastTransitionTime":"2026-01-27T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.462079 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.462133 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.462147 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.462179 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.462193 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:33Z","lastTransitionTime":"2026-01-27T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.565246 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.565321 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.565342 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.565373 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.565397 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:33Z","lastTransitionTime":"2026-01-27T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.668496 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.668589 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.668619 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.668648 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.668668 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:33Z","lastTransitionTime":"2026-01-27T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.729871 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 18:52:48.501216569 +0000 UTC Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.747301 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.747332 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:33 crc kubenswrapper[4907]: E0127 18:06:33.747600 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.747790 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:33 crc kubenswrapper[4907]: E0127 18:06:33.747914 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:33 crc kubenswrapper[4907]: E0127 18:06:33.748006 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.772963 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.773029 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.773050 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.773075 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.773092 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:33Z","lastTransitionTime":"2026-01-27T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.876173 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.876224 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.876235 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.876252 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.876265 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:33Z","lastTransitionTime":"2026-01-27T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.978232 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.978271 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.978280 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.978298 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:33 crc kubenswrapper[4907]: I0127 18:06:33.978309 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:33Z","lastTransitionTime":"2026-01-27T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.080972 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.081033 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.081053 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.081078 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.081096 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:34Z","lastTransitionTime":"2026-01-27T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.183359 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.183415 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.183433 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.183453 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.183468 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:34Z","lastTransitionTime":"2026-01-27T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.285970 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.286006 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.286015 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.287733 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.287783 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:34Z","lastTransitionTime":"2026-01-27T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.390461 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.390506 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.390519 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.390604 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.390633 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:34Z","lastTransitionTime":"2026-01-27T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.493761 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.493860 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.493880 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.493900 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.493915 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:34Z","lastTransitionTime":"2026-01-27T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.597373 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.597418 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.597433 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.597454 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.597469 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:34Z","lastTransitionTime":"2026-01-27T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.701169 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.701273 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.701294 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.701730 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.702050 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:34Z","lastTransitionTime":"2026-01-27T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.730536 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:11:08.226406603 +0000 UTC Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.748062 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:34 crc kubenswrapper[4907]: E0127 18:06:34.748238 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.806139 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.806203 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.806215 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.806268 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.806282 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:34Z","lastTransitionTime":"2026-01-27T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.909699 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.909783 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.909816 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.909847 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:34 crc kubenswrapper[4907]: I0127 18:06:34.909869 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:34Z","lastTransitionTime":"2026-01-27T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.013326 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.013429 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.013448 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.013474 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.013494 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:35Z","lastTransitionTime":"2026-01-27T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.116197 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.116267 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.116297 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.116331 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.116352 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:35Z","lastTransitionTime":"2026-01-27T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.219107 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.219184 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.219197 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.219214 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.219226 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:35Z","lastTransitionTime":"2026-01-27T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.322657 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.322738 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.322760 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.322789 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.322807 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:35Z","lastTransitionTime":"2026-01-27T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.425809 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.425884 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.425897 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.425919 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.425980 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:35Z","lastTransitionTime":"2026-01-27T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.529632 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.529777 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.529813 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.529845 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.529869 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:35Z","lastTransitionTime":"2026-01-27T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.632702 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.632791 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.632804 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.632838 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.632858 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:35Z","lastTransitionTime":"2026-01-27T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.730915 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 02:47:54.560604386 +0000 UTC Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.736174 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.736259 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.736293 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.736328 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.736352 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:35Z","lastTransitionTime":"2026-01-27T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.747127 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.747193 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.747219 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:35 crc kubenswrapper[4907]: E0127 18:06:35.747302 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:35 crc kubenswrapper[4907]: E0127 18:06:35.747467 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:35 crc kubenswrapper[4907]: E0127 18:06:35.747713 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.771197 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.788059 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.808657 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.827939 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:24Z\\\",\\\"message\\\":\\\"adbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}\\\\nI0127 18:06:24.694725 6347 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 
18:06:24.694734 6347 services_controller.go:360] Finished syncing service console on namespace openshift-console for network=default : 1.566197ms\\\\nI0127 18:06:24.694737 6347 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0127 18:06:24.694713 6347 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:24.694767 6347 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-operators for net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e99
5781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.839879 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.839949 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.839962 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.839999 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.840013 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:35Z","lastTransitionTime":"2026-01-27T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.850294 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.864426 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.880908 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.894504 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.916898 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dc
b1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.934430 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc 
kubenswrapper[4907]: I0127 18:06:35.942718 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.942918 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.943032 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.943133 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.943232 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:35Z","lastTransitionTime":"2026-01-27T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.955600 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.970438 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.983201 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:35 crc kubenswrapper[4907]: I0127 18:06:35.997680 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.011144 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.022899 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.037708 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.046670 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.046763 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.046779 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:36 crc 
kubenswrapper[4907]: I0127 18:06:36.046797 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.046810 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:36Z","lastTransitionTime":"2026-01-27T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.149249 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.149303 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.149316 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.149335 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.149349 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:36Z","lastTransitionTime":"2026-01-27T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.252946 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.253023 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.253042 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.253075 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.253103 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:36Z","lastTransitionTime":"2026-01-27T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.355807 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.355876 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.355889 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.355911 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.355926 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:36Z","lastTransitionTime":"2026-01-27T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.459241 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.459306 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.459349 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.459368 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.459383 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:36Z","lastTransitionTime":"2026-01-27T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.562358 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.562410 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.562423 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.562441 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.562452 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:36Z","lastTransitionTime":"2026-01-27T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.665954 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.666014 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.666026 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.666049 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.666064 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:36Z","lastTransitionTime":"2026-01-27T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.731761 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 02:13:15.197237451 +0000 UTC Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.747165 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:36 crc kubenswrapper[4907]: E0127 18:06:36.747340 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.768992 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.769074 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.769099 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.769130 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.769148 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:36Z","lastTransitionTime":"2026-01-27T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.872520 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.872600 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.872621 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.872641 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.872661 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:36Z","lastTransitionTime":"2026-01-27T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.976279 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.976323 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.976336 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.976353 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:36 crc kubenswrapper[4907]: I0127 18:06:36.976366 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:36Z","lastTransitionTime":"2026-01-27T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.079020 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.079068 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.079083 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.079108 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.079124 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:37Z","lastTransitionTime":"2026-01-27T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.182464 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.182525 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.182535 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.182578 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.182591 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:37Z","lastTransitionTime":"2026-01-27T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.285269 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.285367 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.285385 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.285404 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.285418 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:37Z","lastTransitionTime":"2026-01-27T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.388371 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.388420 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.388435 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.388456 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.388471 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:37Z","lastTransitionTime":"2026-01-27T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.419374 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.419795 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 18:07:09.419587357 +0000 UTC m=+84.548869979 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.492057 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.492136 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.492160 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.492185 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.492204 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:37Z","lastTransitionTime":"2026-01-27T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.525196 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.525341 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.525428 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.525498 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.525912 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.525965 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.525994 4907 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.526062 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.526108 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.526070 4907 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.526210 4907 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.526275 4907 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:37 crc 
kubenswrapper[4907]: E0127 18:06:37.526305 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:07:09.526104257 +0000 UTC m=+84.655386869 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.526418 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:07:09.526386276 +0000 UTC m=+84.655668918 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.526462 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:07:09.526434997 +0000 UTC m=+84.655717639 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.526551 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:07:09.5265268 +0000 UTC m=+84.655809452 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.596079 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.596151 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.596173 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.596202 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.596224 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:37Z","lastTransitionTime":"2026-01-27T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.699836 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.699904 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.699923 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.699951 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.699970 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:37Z","lastTransitionTime":"2026-01-27T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.732708 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 23:20:41.239384591 +0000 UTC Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.747337 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.747401 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.747336 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.747687 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.747864 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:37 crc kubenswrapper[4907]: E0127 18:06:37.748042 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.802862 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.802922 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.802933 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.802955 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.802968 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:37Z","lastTransitionTime":"2026-01-27T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.907668 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.907781 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.907802 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.907835 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:37 crc kubenswrapper[4907]: I0127 18:06:37.907857 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:37Z","lastTransitionTime":"2026-01-27T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.011589 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.011675 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.011700 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.011730 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.011752 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:38Z","lastTransitionTime":"2026-01-27T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.114331 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.114391 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.114409 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.114428 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.114439 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:38Z","lastTransitionTime":"2026-01-27T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.217317 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.217397 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.217420 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.217451 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.217473 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:38Z","lastTransitionTime":"2026-01-27T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.320408 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.320491 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.320507 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.320546 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.320598 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:38Z","lastTransitionTime":"2026-01-27T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.422920 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.423006 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.423021 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.423042 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.423055 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:38Z","lastTransitionTime":"2026-01-27T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.526524 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.526608 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.526622 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.526641 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.526651 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:38Z","lastTransitionTime":"2026-01-27T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.630226 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.630293 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.630307 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.630327 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.630343 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:38Z","lastTransitionTime":"2026-01-27T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.733012 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.733069 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.733087 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.733109 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.733127 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:38Z","lastTransitionTime":"2026-01-27T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.733247 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 07:29:33.857948139 +0000 UTC Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.747821 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:38 crc kubenswrapper[4907]: E0127 18:06:38.747988 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.835439 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.835486 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.835497 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.835515 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.835527 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:38Z","lastTransitionTime":"2026-01-27T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.938543 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.938653 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.938671 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.938696 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:38 crc kubenswrapper[4907]: I0127 18:06:38.938718 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:38Z","lastTransitionTime":"2026-01-27T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.041046 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.041081 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.041091 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.041108 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.041120 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:39Z","lastTransitionTime":"2026-01-27T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.143706 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.143774 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.143793 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.143817 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.143834 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:39Z","lastTransitionTime":"2026-01-27T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.246181 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.246290 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.246318 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.246353 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.246378 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:39Z","lastTransitionTime":"2026-01-27T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.349769 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.349835 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.349844 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.349862 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.349873 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:39Z","lastTransitionTime":"2026-01-27T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.453098 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.453190 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.453211 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.453246 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.453262 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:39Z","lastTransitionTime":"2026-01-27T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.556224 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.556293 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.556308 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.556328 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.556348 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:39Z","lastTransitionTime":"2026-01-27T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.659887 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.659943 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.659956 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.659975 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.659989 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:39Z","lastTransitionTime":"2026-01-27T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.733875 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 06:02:45.914038029 +0000 UTC Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.747454 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.747710 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:39 crc kubenswrapper[4907]: E0127 18:06:39.747768 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.747481 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:39 crc kubenswrapper[4907]: E0127 18:06:39.748001 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:39 crc kubenswrapper[4907]: E0127 18:06:39.748218 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.749610 4907 scope.go:117] "RemoveContainer" containerID="b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.764529 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.764682 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.764701 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.764726 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.764782 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:39Z","lastTransitionTime":"2026-01-27T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.868445 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.868548 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.868650 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.868723 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.868754 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:39Z","lastTransitionTime":"2026-01-27T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.971678 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.971732 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.971750 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.971768 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:39 crc kubenswrapper[4907]: I0127 18:06:39.971781 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:39Z","lastTransitionTime":"2026-01-27T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.075522 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.075616 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.075635 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.075658 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.075675 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:40Z","lastTransitionTime":"2026-01-27T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.111588 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/1.log" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.114453 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7"} Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.115828 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.145931 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019b
ee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f
8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.164591 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.178014 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.178056 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.178070 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.178087 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.178100 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:40Z","lastTransitionTime":"2026-01-27T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.181763 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.202283 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.222805 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5c
c361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.240932 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc 
kubenswrapper[4907]: I0127 18:06:40.255760 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699ee
e8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 
18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.268545 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.279030 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.279594 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.279631 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.279640 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.279664 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.279676 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:40Z","lastTransitionTime":"2026-01-27T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.292497 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.304481 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.319109 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.330215 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.343618 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.358237 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.371629 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.382088 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.382136 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.382145 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.382159 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.382170 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:40Z","lastTransitionTime":"2026-01-27T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.391608 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:24Z\\\",\\\"message\\\":\\\"adbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}\\\\nI0127 18:06:24.694725 6347 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 
18:06:24.694734 6347 services_controller.go:360] Finished syncing service console on namespace openshift-console for network=default : 1.566197ms\\\\nI0127 18:06:24.694737 6347 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0127 18:06:24.694713 6347 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:24.694767 6347 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-operators for 
net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.459118 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.469016 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.475316 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba3
42998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.484598 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.484636 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.484646 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.484676 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.484689 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:40Z","lastTransitionTime":"2026-01-27T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.489022 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.499677 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.510677 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.525421 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.541666 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.554818 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.567547 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.579207 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.587413 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.587463 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.587474 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.587491 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.587503 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:40Z","lastTransitionTime":"2026-01-27T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.596712 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z 
is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.622459 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:24Z\\\",\\\"message\\\":\\\"adbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}\\\\nI0127 18:06:24.694725 6347 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 
18:06:24.694734 6347 services_controller.go:360] Finished syncing service console on namespace openshift-console for network=default : 1.566197ms\\\\nI0127 18:06:24.694737 6347 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0127 18:06:24.694713 6347 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:24.694767 6347 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-operators for 
net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.644380 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.656741 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.666041 4907 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.674381 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T
18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.686906 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5c
c361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.690971 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.690999 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.691010 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.691025 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.691036 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:40Z","lastTransitionTime":"2026-01-27T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.698132 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:40 crc 
kubenswrapper[4907]: I0127 18:06:40.734552 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 03:05:04.33079498 +0000 UTC Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.747098 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:40 crc kubenswrapper[4907]: E0127 18:06:40.747224 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.793414 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.793455 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.793463 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.793479 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.793492 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:40Z","lastTransitionTime":"2026-01-27T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.896890 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.896977 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.897001 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.897034 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:40 crc kubenswrapper[4907]: I0127 18:06:40.897061 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:40Z","lastTransitionTime":"2026-01-27T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.000494 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.000631 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.000660 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.000693 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.000721 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:41Z","lastTransitionTime":"2026-01-27T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.103827 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.103916 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.103942 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.103973 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.103998 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:41Z","lastTransitionTime":"2026-01-27T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.119535 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/2.log" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.120652 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/1.log" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.123847 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7" exitCode=1 Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.123945 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7"} Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.124006 4907 scope.go:117] "RemoveContainer" containerID="b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.124897 4907 scope.go:117] "RemoveContainer" containerID="14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7" Jan 27 18:06:41 crc kubenswrapper[4907]: E0127 18:06:41.125129 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.153304 4907 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058
304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.163764 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:41 crc kubenswrapper[4907]: E0127 18:06:41.163988 4907 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:41 crc kubenswrapper[4907]: E0127 18:06:41.164063 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs podName:eeaae2ee-c57b-4323-9d3c-563d87d85f08 nodeName:}" failed. No retries permitted until 2026-01-27 18:06:57.164043975 +0000 UTC m=+72.293326587 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs") pod "network-metrics-daemon-n2z5k" (UID: "eeaae2ee-c57b-4323-9d3c-563d87d85f08") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.170668 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-
cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.186422 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.201945 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.207451 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.207570 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.207589 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.207618 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.207629 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:41Z","lastTransitionTime":"2026-01-27T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.214397 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329acd789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.229320 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.245133 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.261200 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.278752 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.293667 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.308123 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.309660 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.309802 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.309890 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.310006 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.310097 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:41Z","lastTransitionTime":"2026-01-27T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.331118 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b96354ffd61a1a1748d13d4699e0b1d2a9c9fce7598c79d410c07869bbe617ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:24Z\\\",\\\"message\\\":\\\"adbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}\\\\nI0127 18:06:24.694725 6347 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 
18:06:24.694734 6347 services_controller.go:360] Finished syncing service console on namespace openshift-console for network=default : 1.566197ms\\\\nI0127 18:06:24.694737 6347 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0127 18:06:24.694713 6347 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:24.694767 6347 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-operators for net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"message\\\":\\\"18:06:40.625672 6548 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625699 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625705 6548 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625754 6548 
obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625782 6548 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jqfkt in node crc\\\\nI0127 18:06:40.625771 6548 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:40.625848 6548 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Val\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\"
:\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/
env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc 
kubenswrapper[4907]: I0127 18:06:41.354353 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.370883 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.384502 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.396651 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.412290 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dc
b1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.412599 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.412620 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.412630 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.412648 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.412659 4907 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:41Z","lastTransitionTime":"2026-01-27T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.440229 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:41 crc 
kubenswrapper[4907]: I0127 18:06:41.515221 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.515274 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.515287 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.515312 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.515328 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:41Z","lastTransitionTime":"2026-01-27T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.618144 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.618195 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.618208 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.618229 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.618243 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:41Z","lastTransitionTime":"2026-01-27T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.721205 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.721762 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.721855 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.721949 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.722024 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:41Z","lastTransitionTime":"2026-01-27T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.735679 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 06:17:20.097687167 +0000 UTC Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.747211 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:41 crc kubenswrapper[4907]: E0127 18:06:41.747454 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.747911 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.748110 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:41 crc kubenswrapper[4907]: E0127 18:06:41.748283 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:41 crc kubenswrapper[4907]: E0127 18:06:41.749212 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.825272 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.825329 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.825341 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.825360 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.825668 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:41Z","lastTransitionTime":"2026-01-27T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.928876 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.928925 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.928943 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.928968 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:41 crc kubenswrapper[4907]: I0127 18:06:41.928989 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:41Z","lastTransitionTime":"2026-01-27T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.031919 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.031979 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.031997 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.032020 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.032039 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.131605 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/2.log" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.134322 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.134373 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.134395 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.134420 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.134439 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.136435 4907 scope.go:117] "RemoveContainer" containerID="14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7" Jan 27 18:06:42 crc kubenswrapper[4907]: E0127 18:06:42.136703 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.148486 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.163511 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.188019 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"message\\\":\\\"18:06:40.625672 6548 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625699 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625705 6548 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625754 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625782 6548 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jqfkt in node crc\\\\nI0127 18:06:40.625771 6548 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:40.625848 6548 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Val\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e99
5781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.201574 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.215851 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.227550 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.237258 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.237296 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.237308 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.237326 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.237338 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.237368 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.253253 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-
cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.263716 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc 
kubenswrapper[4907]: I0127 18:06:42.284914 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.299253 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.313647 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.327171 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.339397 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.340542 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.340602 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.340617 4907 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.340640 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.340651 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.355278 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] 
\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.369680 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.380368 4907 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.391200 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.443305 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.443367 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.443387 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.443415 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.443436 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.545840 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.545902 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.545914 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.545935 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.545949 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.561967 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.562287 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.562383 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.562488 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.562619 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: E0127 18:06:42.583579 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.587449 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.587669 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.587741 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.587801 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.587862 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: E0127 18:06:42.601079 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.605454 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.605634 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.605704 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.605798 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.605881 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: E0127 18:06:42.625290 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.628952 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.628985 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.628996 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.629012 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.629022 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: E0127 18:06:42.647054 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.651013 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.651067 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.651077 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.651091 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.651101 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: E0127 18:06:42.664873 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:42 crc kubenswrapper[4907]: E0127 18:06:42.665132 4907 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.666889 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.666931 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.666943 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.666959 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.666970 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.737161 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 12:55:30.489115348 +0000 UTC Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.747698 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:42 crc kubenswrapper[4907]: E0127 18:06:42.747891 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.769529 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.769650 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.769668 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.769698 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.769719 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.872299 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.872361 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.872377 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.872401 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.872416 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.974760 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.974815 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.974827 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.974848 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:42 crc kubenswrapper[4907]: I0127 18:06:42.974864 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:42Z","lastTransitionTime":"2026-01-27T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.077392 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.077441 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.077457 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.077478 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.077493 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:43Z","lastTransitionTime":"2026-01-27T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.180320 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.180392 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.180416 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.180444 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.180467 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:43Z","lastTransitionTime":"2026-01-27T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.284106 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.284710 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.284744 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.284777 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.284796 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:43Z","lastTransitionTime":"2026-01-27T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.388542 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.388623 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.388640 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.388667 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.388687 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:43Z","lastTransitionTime":"2026-01-27T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.491958 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.492033 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.492055 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.492086 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.492108 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:43Z","lastTransitionTime":"2026-01-27T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.595408 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.595463 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.595479 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.595503 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.595522 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:43Z","lastTransitionTime":"2026-01-27T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.699519 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.699625 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.699645 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.699669 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.699689 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:43Z","lastTransitionTime":"2026-01-27T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.737826 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 16:24:47.932440394 +0000 UTC Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.747427 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.747506 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.747549 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:43 crc kubenswrapper[4907]: E0127 18:06:43.747709 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:43 crc kubenswrapper[4907]: E0127 18:06:43.747821 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:43 crc kubenswrapper[4907]: E0127 18:06:43.747999 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.803131 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.803182 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.803204 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.803233 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.803252 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:43Z","lastTransitionTime":"2026-01-27T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.905146 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.905250 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.905269 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.905295 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:43 crc kubenswrapper[4907]: I0127 18:06:43.905328 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:43Z","lastTransitionTime":"2026-01-27T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.007830 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.007888 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.007905 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.007927 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.007945 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:44Z","lastTransitionTime":"2026-01-27T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.110805 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.110869 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.110888 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.110913 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.110931 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:44Z","lastTransitionTime":"2026-01-27T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.213499 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.213548 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.213585 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.213604 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.213616 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:44Z","lastTransitionTime":"2026-01-27T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.317210 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.317308 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.317344 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.317365 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.317382 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:44Z","lastTransitionTime":"2026-01-27T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.420766 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.420823 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.420838 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.420858 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.420872 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:44Z","lastTransitionTime":"2026-01-27T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.523717 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.523771 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.523786 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.523805 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.523819 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:44Z","lastTransitionTime":"2026-01-27T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.627813 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.627902 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.627929 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.627964 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.627992 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:44Z","lastTransitionTime":"2026-01-27T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.731048 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.731100 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.731111 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.731131 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.731145 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:44Z","lastTransitionTime":"2026-01-27T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.739507 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 18:28:41.559494092 +0000 UTC Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.747906 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:44 crc kubenswrapper[4907]: E0127 18:06:44.748117 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.834925 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.835002 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.835021 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.835048 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.835067 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:44Z","lastTransitionTime":"2026-01-27T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.938029 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.938075 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.938106 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.938124 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:44 crc kubenswrapper[4907]: I0127 18:06:44.938136 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:44Z","lastTransitionTime":"2026-01-27T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.048013 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.048094 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.048114 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.048143 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.048159 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:45Z","lastTransitionTime":"2026-01-27T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.151709 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.151788 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.151810 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.151838 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.151856 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:45Z","lastTransitionTime":"2026-01-27T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.255853 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.255942 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.255960 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.255983 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.255999 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:45Z","lastTransitionTime":"2026-01-27T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.360109 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.360163 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.360175 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.360195 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.360211 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:45Z","lastTransitionTime":"2026-01-27T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.463670 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.463756 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.463774 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.463800 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.463819 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:45Z","lastTransitionTime":"2026-01-27T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.566754 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.566928 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.566955 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.566980 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.566997 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:45Z","lastTransitionTime":"2026-01-27T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.670487 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.670597 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.670616 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.670641 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.670660 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:45Z","lastTransitionTime":"2026-01-27T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.739774 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 09:36:13.549761736 +0000 UTC Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.747349 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.747372 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:45 crc kubenswrapper[4907]: E0127 18:06:45.747489 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:45 crc kubenswrapper[4907]: E0127 18:06:45.747726 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.747909 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:45 crc kubenswrapper[4907]: E0127 18:06:45.748104 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.763365 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc 
kubenswrapper[4907]: I0127 18:06:45.773655 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.773722 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.773742 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.773766 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.773783 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:45Z","lastTransitionTime":"2026-01-27T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.788247 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.805410 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.820632 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.835728 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.862616 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dc
b1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.876085 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.876119 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.876127 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.876140 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.876153 4907 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:45Z","lastTransitionTime":"2026-01-27T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.886158 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCo
unt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.904345 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.918601 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.937122 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.950526 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.964850 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.979687 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.979742 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.979667 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.979764 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.979931 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.979959 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:45Z","lastTransitionTime":"2026-01-27T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:45 crc kubenswrapper[4907]: I0127 18:06:45.995231 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.010837 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.022995 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.038608 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.063258 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"message\\\":\\\"18:06:40.625672 6548 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625699 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625705 6548 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625754 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625782 6548 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jqfkt in node crc\\\\nI0127 18:06:40.625771 6548 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:40.625848 6548 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Val\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e99
5781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.081572 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.081609 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.081621 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.081637 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.081648 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:46Z","lastTransitionTime":"2026-01-27T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.184482 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.184592 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.184613 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.184639 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.184659 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:46Z","lastTransitionTime":"2026-01-27T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.287174 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.288185 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.288393 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.288765 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.289544 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:46Z","lastTransitionTime":"2026-01-27T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.393667 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.393722 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.393740 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.393763 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.393781 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:46Z","lastTransitionTime":"2026-01-27T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.496542 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.496673 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.496703 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.496733 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.496756 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:46Z","lastTransitionTime":"2026-01-27T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.599207 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.599263 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.599280 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.599302 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.599319 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:46Z","lastTransitionTime":"2026-01-27T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.702247 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.702324 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.702342 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.702372 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.702410 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:46Z","lastTransitionTime":"2026-01-27T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.740977 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 07:31:58.024132806 +0000 UTC Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.747338 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:46 crc kubenswrapper[4907]: E0127 18:06:46.747522 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.805354 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.805398 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.805415 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.805437 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.805455 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:46Z","lastTransitionTime":"2026-01-27T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.908091 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.908169 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.908189 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.908212 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:46 crc kubenswrapper[4907]: I0127 18:06:46.908230 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:46Z","lastTransitionTime":"2026-01-27T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.011544 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.011653 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.011671 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.011695 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.011714 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:47Z","lastTransitionTime":"2026-01-27T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.114655 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.114957 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.115057 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.115149 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.115232 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:47Z","lastTransitionTime":"2026-01-27T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.225063 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.225155 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.225183 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.225217 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.225252 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:47Z","lastTransitionTime":"2026-01-27T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.328656 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.329087 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.329241 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.329399 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.329531 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:47Z","lastTransitionTime":"2026-01-27T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.432882 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.433235 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.433332 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.433434 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.433534 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:47Z","lastTransitionTime":"2026-01-27T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.538222 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.538706 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.538876 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.539115 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.539265 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:47Z","lastTransitionTime":"2026-01-27T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.642099 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.642195 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.642219 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.642251 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.642279 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:47Z","lastTransitionTime":"2026-01-27T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.742652 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 01:41:01.705211671 +0000 UTC Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.745063 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.745127 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.745147 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.745172 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.745190 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:47Z","lastTransitionTime":"2026-01-27T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.747684 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.747771 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.747835 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:47 crc kubenswrapper[4907]: E0127 18:06:47.747890 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:47 crc kubenswrapper[4907]: E0127 18:06:47.747969 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:47 crc kubenswrapper[4907]: E0127 18:06:47.748066 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.848600 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.848958 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.849047 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.849164 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.849257 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:47Z","lastTransitionTime":"2026-01-27T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.952303 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.952375 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.952391 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.952415 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:47 crc kubenswrapper[4907]: I0127 18:06:47.952430 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:47Z","lastTransitionTime":"2026-01-27T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.055202 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.055271 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.055288 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.055314 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.055330 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:48Z","lastTransitionTime":"2026-01-27T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.156945 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.157249 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.157443 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.157578 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.157675 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:48Z","lastTransitionTime":"2026-01-27T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.260398 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.260473 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.260494 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.260523 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.260544 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:48Z","lastTransitionTime":"2026-01-27T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.363157 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.363624 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.363882 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.364332 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.364511 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:48Z","lastTransitionTime":"2026-01-27T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.468132 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.468546 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.468727 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.468866 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.468997 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:48Z","lastTransitionTime":"2026-01-27T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.571944 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.572344 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.572602 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.572798 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.572982 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:48Z","lastTransitionTime":"2026-01-27T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.675395 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.675785 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.675928 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.676062 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.676202 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:48Z","lastTransitionTime":"2026-01-27T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.744253 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 07:53:11.29771291 +0000 UTC Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.747328 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:48 crc kubenswrapper[4907]: E0127 18:06:48.747605 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.778815 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.779273 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.779391 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.779486 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.779612 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:48Z","lastTransitionTime":"2026-01-27T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.883830 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.883888 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.883905 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.883928 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.883945 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:48Z","lastTransitionTime":"2026-01-27T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.985906 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.986219 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.986335 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.986425 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:48 crc kubenswrapper[4907]: I0127 18:06:48.986514 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:48Z","lastTransitionTime":"2026-01-27T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.089266 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.089600 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.089840 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.089930 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.089993 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:49Z","lastTransitionTime":"2026-01-27T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.193220 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.193680 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.193816 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.193922 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.194005 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:49Z","lastTransitionTime":"2026-01-27T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.296996 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.297357 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.297446 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.297569 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.297653 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:49Z","lastTransitionTime":"2026-01-27T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.400652 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.400693 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.400702 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.400715 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.400724 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:49Z","lastTransitionTime":"2026-01-27T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.503249 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.503296 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.503309 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.503326 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.503341 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:49Z","lastTransitionTime":"2026-01-27T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.606648 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.606710 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.606729 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.606753 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.606772 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:49Z","lastTransitionTime":"2026-01-27T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.709869 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.709934 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.709951 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.709976 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.709994 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:49Z","lastTransitionTime":"2026-01-27T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.746059 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 19:30:41.79856 +0000 UTC Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.747686 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.747730 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.747788 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:49 crc kubenswrapper[4907]: E0127 18:06:49.748069 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:49 crc kubenswrapper[4907]: E0127 18:06:49.748128 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:49 crc kubenswrapper[4907]: E0127 18:06:49.748196 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.813404 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.813480 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.813503 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.813531 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.813581 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:49Z","lastTransitionTime":"2026-01-27T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.918088 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.918156 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.918175 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.918206 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:49 crc kubenswrapper[4907]: I0127 18:06:49.918234 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:49Z","lastTransitionTime":"2026-01-27T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.021779 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.021820 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.021832 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.021848 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.021859 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:50Z","lastTransitionTime":"2026-01-27T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.124856 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.124906 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.124930 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.124947 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.125190 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:50Z","lastTransitionTime":"2026-01-27T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.228680 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.228734 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.228747 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.228774 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.228800 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:50Z","lastTransitionTime":"2026-01-27T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.331565 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.331610 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.331623 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.331638 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.331650 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:50Z","lastTransitionTime":"2026-01-27T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.434123 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.434172 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.434186 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.434205 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.434220 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:50Z","lastTransitionTime":"2026-01-27T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.537336 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.537405 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.537425 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.537451 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.537468 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:50Z","lastTransitionTime":"2026-01-27T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.640959 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.641053 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.641082 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.641117 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.641143 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:50Z","lastTransitionTime":"2026-01-27T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.745641 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.745692 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.745705 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.745725 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.745741 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:50Z","lastTransitionTime":"2026-01-27T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.746890 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:51:51.907062364 +0000 UTC Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.747116 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:50 crc kubenswrapper[4907]: E0127 18:06:50.747396 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.848996 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.849070 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.849088 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.849113 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.849132 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:50Z","lastTransitionTime":"2026-01-27T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.952000 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.952069 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.952094 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.952126 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:50 crc kubenswrapper[4907]: I0127 18:06:50.952149 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:50Z","lastTransitionTime":"2026-01-27T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.054927 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.054976 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.054987 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.055003 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.055013 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:51Z","lastTransitionTime":"2026-01-27T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.157421 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.157467 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.157479 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.157496 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.157507 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:51Z","lastTransitionTime":"2026-01-27T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.260825 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.260901 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.260913 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.260936 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.260953 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:51Z","lastTransitionTime":"2026-01-27T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.364378 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.364418 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.364428 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.364445 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.364457 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:51Z","lastTransitionTime":"2026-01-27T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.467086 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.467166 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.467190 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.467217 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.467236 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:51Z","lastTransitionTime":"2026-01-27T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.570126 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.570186 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.570201 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.570220 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.570240 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:51Z","lastTransitionTime":"2026-01-27T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.673242 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.673316 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.673335 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.673358 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.673376 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:51Z","lastTransitionTime":"2026-01-27T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.747435 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 11:58:17.742717141 +0000 UTC Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.747474 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.747508 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.747501 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:51 crc kubenswrapper[4907]: E0127 18:06:51.747774 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:51 crc kubenswrapper[4907]: E0127 18:06:51.747959 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:51 crc kubenswrapper[4907]: E0127 18:06:51.748175 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.776044 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.776120 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.776142 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.776171 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.776190 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:51Z","lastTransitionTime":"2026-01-27T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.879308 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.879374 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.879394 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.879417 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.879434 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:51Z","lastTransitionTime":"2026-01-27T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.982484 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.982596 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.982616 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.982641 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:51 crc kubenswrapper[4907]: I0127 18:06:51.982660 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:51Z","lastTransitionTime":"2026-01-27T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.086504 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.086548 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.086578 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.086597 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.086608 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:52Z","lastTransitionTime":"2026-01-27T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.189546 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.189616 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.189639 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.189683 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.189697 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:52Z","lastTransitionTime":"2026-01-27T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.293078 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.293118 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.293128 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.293144 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.293156 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:52Z","lastTransitionTime":"2026-01-27T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.395257 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.395303 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.395314 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.395334 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.395345 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:52Z","lastTransitionTime":"2026-01-27T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.497927 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.497994 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.498010 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.498028 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.498047 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:52Z","lastTransitionTime":"2026-01-27T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.600260 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.600299 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.600309 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.600325 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.600337 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:52Z","lastTransitionTime":"2026-01-27T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.703652 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.703699 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.703714 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.703730 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.703742 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:52Z","lastTransitionTime":"2026-01-27T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.747277 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:52 crc kubenswrapper[4907]: E0127 18:06:52.747515 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.747621 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 20:50:41.415202085 +0000 UTC Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.806675 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.806718 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.806737 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.806753 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.806763 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:52Z","lastTransitionTime":"2026-01-27T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.908788 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.908860 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.908879 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.908903 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:52 crc kubenswrapper[4907]: I0127 18:06:52.908921 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:52Z","lastTransitionTime":"2026-01-27T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.001017 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.001094 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.001104 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.001125 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.001137 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: E0127 18:06:53.013146 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:53Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.017239 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.017288 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.017299 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.017317 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.017329 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: E0127 18:06:53.030508 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:53Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.033473 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.033525 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.033535 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.033580 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.033595 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: E0127 18:06:53.045228 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:53Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.050640 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.050694 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.050704 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.050722 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.050736 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: E0127 18:06:53.062776 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:53Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.066861 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.066920 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.066936 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.066959 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.066975 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: E0127 18:06:53.080269 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:53Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:53 crc kubenswrapper[4907]: E0127 18:06:53.080388 4907 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.081914 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.081983 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.081994 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.082008 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.082018 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.184138 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.184176 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.184187 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.184203 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.184213 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.287033 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.287079 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.287088 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.287106 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.287123 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.389217 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.389253 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.389263 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.389279 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.389288 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.491745 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.491805 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.491823 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.491843 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.491859 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.594408 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.594472 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.594492 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.594518 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.594535 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.697245 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.697328 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.697350 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.697376 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.697394 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.747684 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.747755 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 13:21:26.726815581 +0000 UTC Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.747713 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.747713 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:53 crc kubenswrapper[4907]: E0127 18:06:53.747887 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:53 crc kubenswrapper[4907]: E0127 18:06:53.747986 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:53 crc kubenswrapper[4907]: E0127 18:06:53.748046 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.799941 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.799981 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.799990 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.800005 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.800014 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.903702 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.903788 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.903815 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.903846 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:53 crc kubenswrapper[4907]: I0127 18:06:53.903874 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:53Z","lastTransitionTime":"2026-01-27T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.006409 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.006455 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.006478 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.006504 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.006520 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:54Z","lastTransitionTime":"2026-01-27T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.109698 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.109781 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.109798 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.109819 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.109833 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:54Z","lastTransitionTime":"2026-01-27T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.212380 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.212425 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.212436 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.212453 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.212464 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:54Z","lastTransitionTime":"2026-01-27T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.314879 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.314926 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.314939 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.314954 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.314964 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:54Z","lastTransitionTime":"2026-01-27T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.417078 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.417124 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.417133 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.417149 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.417158 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:54Z","lastTransitionTime":"2026-01-27T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.520341 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.520401 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.520414 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.520434 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.520448 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:54Z","lastTransitionTime":"2026-01-27T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.623020 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.623092 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.623105 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.623124 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.623137 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:54Z","lastTransitionTime":"2026-01-27T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.725050 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.725098 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.725107 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.725124 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.725135 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:54Z","lastTransitionTime":"2026-01-27T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.747674 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:54 crc kubenswrapper[4907]: E0127 18:06:54.747813 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.747859 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 14:45:45.398796095 +0000 UTC Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.826928 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.826981 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.826993 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.827010 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.827020 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:54Z","lastTransitionTime":"2026-01-27T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.929276 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.929338 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.929353 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.929374 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:54 crc kubenswrapper[4907]: I0127 18:06:54.929387 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:54Z","lastTransitionTime":"2026-01-27T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.031855 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.031914 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.031928 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.031945 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.031955 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:55Z","lastTransitionTime":"2026-01-27T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.134719 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.134778 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.134789 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.134807 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.134822 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:55Z","lastTransitionTime":"2026-01-27T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.236782 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.236836 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.236847 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.236866 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.236880 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:55Z","lastTransitionTime":"2026-01-27T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.339803 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.339868 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.339886 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.339911 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.339929 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:55Z","lastTransitionTime":"2026-01-27T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.441963 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.442006 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.442016 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.442030 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.442040 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:55Z","lastTransitionTime":"2026-01-27T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.544087 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.544126 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.544140 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.544156 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.544166 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:55Z","lastTransitionTime":"2026-01-27T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.647021 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.647082 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.647099 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.647125 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.647145 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:55Z","lastTransitionTime":"2026-01-27T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.747857 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:55 crc kubenswrapper[4907]: E0127 18:06:55.748034 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.748088 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 15:07:27.679249826 +0000 UTC Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.748233 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:55 crc kubenswrapper[4907]: E0127 18:06:55.748387 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.747800 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:55 crc kubenswrapper[4907]: E0127 18:06:55.748936 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.750097 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.750160 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.750182 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.750209 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.750228 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:55Z","lastTransitionTime":"2026-01-27T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.771397 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.793637 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.807319 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.820047 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.832203 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.842340 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.853985 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.854041 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.854056 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.854078 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.854091 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:55Z","lastTransitionTime":"2026-01-27T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.854404 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe6
27a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.866108 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.878832 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.890974 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.905678 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.925894 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"message\\\":\\\"18:06:40.625672 6548 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625699 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625705 6548 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625754 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625782 6548 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jqfkt in node crc\\\\nI0127 18:06:40.625771 6548 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:40.625848 6548 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Val\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e99
5781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.950575 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.956215 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.956245 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.956255 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.956270 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.956280 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:55Z","lastTransitionTime":"2026-01-27T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.966949 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.985315 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:55 crc kubenswrapper[4907]: I0127 18:06:55.997012 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.013415 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dc
b1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:56Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.027196 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:56Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:56 crc 
kubenswrapper[4907]: I0127 18:06:56.059957 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.060020 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.060040 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.060064 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.060080 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:56Z","lastTransitionTime":"2026-01-27T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.162320 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.162385 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.162396 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.162417 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.162430 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:56Z","lastTransitionTime":"2026-01-27T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.265644 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.265712 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.265759 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.265781 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.265798 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:56Z","lastTransitionTime":"2026-01-27T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.369952 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.370438 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.370452 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.370471 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.370482 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:56Z","lastTransitionTime":"2026-01-27T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.472358 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.472431 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.472452 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.472477 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.472497 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:56Z","lastTransitionTime":"2026-01-27T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.575462 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.575537 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.575586 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.575657 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.575688 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:56Z","lastTransitionTime":"2026-01-27T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.678443 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.678489 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.678500 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.678518 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.678529 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:56Z","lastTransitionTime":"2026-01-27T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.747645 4907 scope.go:117] "RemoveContainer" containerID="14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7" Jan 27 18:06:56 crc kubenswrapper[4907]: E0127 18:06:56.747934 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.748160 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:56 crc kubenswrapper[4907]: E0127 18:06:56.748241 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.749107 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 08:54:14.379692558 +0000 UTC Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.781267 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.781344 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.781368 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.781397 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.781418 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:56Z","lastTransitionTime":"2026-01-27T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.884800 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.884840 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.884850 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.884866 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.884878 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:56Z","lastTransitionTime":"2026-01-27T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.987298 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.987340 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.987353 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.987369 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:56 crc kubenswrapper[4907]: I0127 18:06:56.987383 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:56Z","lastTransitionTime":"2026-01-27T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.090504 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.090581 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.090592 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.090608 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.090636 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:57Z","lastTransitionTime":"2026-01-27T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.193301 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.193368 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.193380 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.193416 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.193430 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:57Z","lastTransitionTime":"2026-01-27T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.247106 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:57 crc kubenswrapper[4907]: E0127 18:06:57.247300 4907 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:57 crc kubenswrapper[4907]: E0127 18:06:57.247429 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs podName:eeaae2ee-c57b-4323-9d3c-563d87d85f08 nodeName:}" failed. No retries permitted until 2026-01-27 18:07:29.247400112 +0000 UTC m=+104.376682734 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs") pod "network-metrics-daemon-n2z5k" (UID: "eeaae2ee-c57b-4323-9d3c-563d87d85f08") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.295485 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.295532 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.295544 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.295578 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.295590 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:57Z","lastTransitionTime":"2026-01-27T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.397885 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.397923 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.397932 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.397944 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.397953 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:57Z","lastTransitionTime":"2026-01-27T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.513480 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.513522 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.513532 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.513546 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.513568 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:57Z","lastTransitionTime":"2026-01-27T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.616196 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.616280 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.616307 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.616340 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.616364 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:57Z","lastTransitionTime":"2026-01-27T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.719304 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.719417 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.719437 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.719466 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.719484 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:57Z","lastTransitionTime":"2026-01-27T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.747048 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.747118 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.747195 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:57 crc kubenswrapper[4907]: E0127 18:06:57.747218 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:57 crc kubenswrapper[4907]: E0127 18:06:57.747305 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:57 crc kubenswrapper[4907]: E0127 18:06:57.747464 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.750042 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 18:04:28.065301516 +0000 UTC Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.821456 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.821490 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.821499 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.821512 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.821521 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:57Z","lastTransitionTime":"2026-01-27T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.923913 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.923941 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.923952 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.923963 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:57 crc kubenswrapper[4907]: I0127 18:06:57.923971 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:57Z","lastTransitionTime":"2026-01-27T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.025949 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.025997 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.026006 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.026022 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.026069 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:58Z","lastTransitionTime":"2026-01-27T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.128400 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.128432 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.128441 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.128455 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.128463 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:58Z","lastTransitionTime":"2026-01-27T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.230614 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.230662 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.230672 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.230687 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.230698 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:58Z","lastTransitionTime":"2026-01-27T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.332823 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.332863 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.332876 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.332893 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.332905 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:58Z","lastTransitionTime":"2026-01-27T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.435624 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.435663 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.435674 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.435691 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.435702 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:58Z","lastTransitionTime":"2026-01-27T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.537937 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.537968 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.537978 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.537993 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.538003 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:58Z","lastTransitionTime":"2026-01-27T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.640492 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.640535 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.640545 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.640573 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.640583 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:58Z","lastTransitionTime":"2026-01-27T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.743620 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.743676 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.743688 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.743707 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.743734 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:58Z","lastTransitionTime":"2026-01-27T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.746979 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:06:58 crc kubenswrapper[4907]: E0127 18:06:58.747099 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.751197 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 11:39:59.381038056 +0000 UTC Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.846259 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.846308 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.846321 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.846340 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.846353 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:58Z","lastTransitionTime":"2026-01-27T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.948665 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.948736 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.948746 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.948766 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:58 crc kubenswrapper[4907]: I0127 18:06:58.948778 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:58Z","lastTransitionTime":"2026-01-27T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.051689 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.051749 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.051766 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.051789 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.051806 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:59Z","lastTransitionTime":"2026-01-27T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.154499 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.154626 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.154660 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.154688 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.154710 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:59Z","lastTransitionTime":"2026-01-27T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.193621 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fgtpz_985b7738-a27c-4276-8160-c2baa64ab7f6/kube-multus/0.log" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.193672 4907 generic.go:334] "Generic (PLEG): container finished" podID="985b7738-a27c-4276-8160-c2baa64ab7f6" containerID="3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d" exitCode=1 Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.193701 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fgtpz" event={"ID":"985b7738-a27c-4276-8160-c2baa64ab7f6","Type":"ContainerDied","Data":"3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d"} Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.194044 4907 scope.go:117] "RemoveContainer" containerID="3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.207539 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.218598 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.236483 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:58Z\\\",\\\"message\\\":\\\"2026-01-27T18:06:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd\\\\n2026-01-27T18:06:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd to /host/opt/cni/bin/\\\\n2026-01-27T18:06:13Z [verbose] multus-daemon started\\\\n2026-01-27T18:06:13Z [verbose] Readiness Indicator file check\\\\n2026-01-27T18:06:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.252443 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"message\\\":\\\"18:06:40.625672 6548 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625699 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625705 6548 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625754 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625782 6548 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jqfkt in node crc\\\\nI0127 18:06:40.625771 6548 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:40.625848 6548 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Val\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e99
5781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.256836 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.256905 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.256922 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.257002 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.257048 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:59Z","lastTransitionTime":"2026-01-27T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.272634 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.287409 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.298742 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.308523 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.321265 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dc
b1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.331228 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc 
kubenswrapper[4907]: I0127 18:06:59.343386 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699ee
e8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 
18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.356967 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.359227 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.359260 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.359269 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.359283 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.359292 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:59Z","lastTransitionTime":"2026-01-27T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.367740 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.382524 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.393045 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.408380 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.421639 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.432435 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:06:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.470775 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.470824 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.470835 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:59 crc 
kubenswrapper[4907]: I0127 18:06:59.470854 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.470865 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:59Z","lastTransitionTime":"2026-01-27T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.572885 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.572920 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.572931 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.572948 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.572959 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:59Z","lastTransitionTime":"2026-01-27T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.675352 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.675418 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.675437 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.675463 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.675482 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:59Z","lastTransitionTime":"2026-01-27T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.747616 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.747678 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.747737 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:06:59 crc kubenswrapper[4907]: E0127 18:06:59.747825 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:06:59 crc kubenswrapper[4907]: E0127 18:06:59.747959 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:06:59 crc kubenswrapper[4907]: E0127 18:06:59.748044 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.751420 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 05:09:36.249288924 +0000 UTC Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.778169 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.778248 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.778270 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.778295 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.778314 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:59Z","lastTransitionTime":"2026-01-27T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.881102 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.881141 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.881151 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.881168 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.881180 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:59Z","lastTransitionTime":"2026-01-27T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.983739 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.983779 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.983789 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.983802 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:06:59 crc kubenswrapper[4907]: I0127 18:06:59.983811 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:06:59Z","lastTransitionTime":"2026-01-27T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.086865 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.086903 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.086913 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.086931 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.086951 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:00Z","lastTransitionTime":"2026-01-27T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.189770 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.189811 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.189823 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.189841 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.189851 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:00Z","lastTransitionTime":"2026-01-27T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.198017 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fgtpz_985b7738-a27c-4276-8160-c2baa64ab7f6/kube-multus/0.log" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.198067 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fgtpz" event={"ID":"985b7738-a27c-4276-8160-c2baa64ab7f6","Type":"ContainerStarted","Data":"dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505"} Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.211149 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.221398 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.233071 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.245649 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba3
42998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.258113 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.269264 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.280475 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.291108 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.292986 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.293035 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.293051 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.293074 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.293091 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:00Z","lastTransitionTime":"2026-01-27T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.304481 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:58Z\\\",\\\"message\\\":\\\"2026-01-27T18:06:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd\\\\n2026-01-27T18:06:13+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd to /host/opt/cni/bin/\\\\n2026-01-27T18:06:13Z [verbose] multus-daemon started\\\\n2026-01-27T18:06:13Z [verbose] Readiness Indicator file check\\\\n2026-01-27T18:06:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.322384 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"message\\\":\\\"18:06:40.625672 6548 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625699 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625705 6548 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625754 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625782 6548 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jqfkt in node crc\\\\nI0127 18:06:40.625771 6548 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:40.625848 6548 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Val\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e99
5781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.334728 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.349078 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.363191 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T1
8:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.376791 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.395543 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.395623 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.395640 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.395663 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.395676 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:00Z","lastTransitionTime":"2026-01-27T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.395790 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.407735 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc 
kubenswrapper[4907]: I0127 18:07:00.426987 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.442699 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.498219 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.498251 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.498260 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.498275 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.498286 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:00Z","lastTransitionTime":"2026-01-27T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.600928 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.600988 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.601007 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.601034 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.601047 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:00Z","lastTransitionTime":"2026-01-27T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.704315 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.704384 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.704397 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.704418 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.704433 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:00Z","lastTransitionTime":"2026-01-27T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.747958 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:00 crc kubenswrapper[4907]: E0127 18:07:00.748189 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.752144 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 08:30:52.299722405 +0000 UTC Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.807725 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.807795 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.807814 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.807839 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.807857 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:00Z","lastTransitionTime":"2026-01-27T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.910735 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.910778 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.910789 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.910806 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:00 crc kubenswrapper[4907]: I0127 18:07:00.910820 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:00Z","lastTransitionTime":"2026-01-27T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.012984 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.013049 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.013067 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.013094 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.013120 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:01Z","lastTransitionTime":"2026-01-27T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.116100 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.116178 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.116198 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.116221 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.116236 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:01Z","lastTransitionTime":"2026-01-27T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.218427 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.218472 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.218482 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.218501 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.218515 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:01Z","lastTransitionTime":"2026-01-27T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.321052 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.321107 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.321125 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.321147 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.321164 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:01Z","lastTransitionTime":"2026-01-27T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.424590 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.424661 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.424680 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.424709 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.424729 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:01Z","lastTransitionTime":"2026-01-27T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.528141 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.528234 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.528259 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.528307 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.528344 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:01Z","lastTransitionTime":"2026-01-27T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.631146 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.631242 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.631267 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.631345 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.631375 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:01Z","lastTransitionTime":"2026-01-27T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.734724 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.734775 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.734785 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.734801 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.734813 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:01Z","lastTransitionTime":"2026-01-27T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.747515 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.747608 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:01 crc kubenswrapper[4907]: E0127 18:07:01.747706 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.747757 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:01 crc kubenswrapper[4907]: E0127 18:07:01.747926 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:01 crc kubenswrapper[4907]: E0127 18:07:01.748002 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.752457 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 06:49:41.346015523 +0000 UTC Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.837135 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.837215 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.837233 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.837251 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.837264 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:01Z","lastTransitionTime":"2026-01-27T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.939839 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.939883 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.939895 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.939911 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:01 crc kubenswrapper[4907]: I0127 18:07:01.939924 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:01Z","lastTransitionTime":"2026-01-27T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.042526 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.042628 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.042643 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.042690 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.042705 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:02Z","lastTransitionTime":"2026-01-27T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.145347 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.145381 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.145390 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.145403 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.145411 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:02Z","lastTransitionTime":"2026-01-27T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.248087 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.248150 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.248170 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.248202 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.248224 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:02Z","lastTransitionTime":"2026-01-27T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.351484 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.351536 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.351590 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.351621 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.351643 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:02Z","lastTransitionTime":"2026-01-27T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.455299 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.455358 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.455374 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.455399 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.455419 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:02Z","lastTransitionTime":"2026-01-27T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.558546 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.558639 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.558663 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.558689 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.558713 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:02Z","lastTransitionTime":"2026-01-27T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.662513 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.662590 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.662604 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.662634 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.662649 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:02Z","lastTransitionTime":"2026-01-27T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.747616 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:02 crc kubenswrapper[4907]: E0127 18:07:02.747870 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.753639 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:26:02.882215952 +0000 UTC Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.765574 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.765626 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.765639 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.765659 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.765673 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:02Z","lastTransitionTime":"2026-01-27T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.869076 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.869137 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.869155 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.869179 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.869198 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:02Z","lastTransitionTime":"2026-01-27T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.972408 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.972481 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.972505 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.972533 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:02 crc kubenswrapper[4907]: I0127 18:07:02.972636 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:02Z","lastTransitionTime":"2026-01-27T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.075929 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.075994 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.076017 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.076046 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.076071 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.179008 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.179067 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.179081 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.179101 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.179114 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.197308 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.197366 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.197379 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.197400 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.197414 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: E0127 18:07:03.213717 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.218829 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.218890 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.218913 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.218937 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.218953 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: E0127 18:07:03.241002 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.277416 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.277518 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.277544 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.277605 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.277623 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: E0127 18:07:03.294389 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.298204 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.298245 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.298259 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.298280 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.298294 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: E0127 18:07:03.314912 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.319596 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.319621 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.319629 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.319645 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.319655 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: E0127 18:07:03.332256 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:03 crc kubenswrapper[4907]: E0127 18:07:03.332388 4907 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.334471 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.334534 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.334552 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.334632 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.334653 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.437012 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.437063 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.437075 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.437091 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.437102 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.540076 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.540134 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.540149 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.540170 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.540188 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.643210 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.643251 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.643266 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.643287 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.643302 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.746121 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.746170 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.746184 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.746204 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.746218 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.747708 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:03 crc kubenswrapper[4907]: E0127 18:07:03.747841 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.748031 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.748039 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:03 crc kubenswrapper[4907]: E0127 18:07:03.748136 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:03 crc kubenswrapper[4907]: E0127 18:07:03.748301 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.754282 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:40:21.121491901 +0000 UTC Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.848798 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.848857 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.848878 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.848904 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.848981 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.951272 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.951322 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.951332 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.951351 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:03 crc kubenswrapper[4907]: I0127 18:07:03.951361 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:03Z","lastTransitionTime":"2026-01-27T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.054321 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.054387 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.054401 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.054418 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.054432 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:04Z","lastTransitionTime":"2026-01-27T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.156528 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.156596 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.156615 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.156633 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.156645 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:04Z","lastTransitionTime":"2026-01-27T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.258531 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.258588 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.258601 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.258616 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.258627 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:04Z","lastTransitionTime":"2026-01-27T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.362511 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.362770 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.362784 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.362805 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.362820 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:04Z","lastTransitionTime":"2026-01-27T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.465774 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.465819 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.465829 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.465846 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.465857 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:04Z","lastTransitionTime":"2026-01-27T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.568517 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.568648 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.568662 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.568685 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.568700 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:04Z","lastTransitionTime":"2026-01-27T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.671673 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.671713 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.671725 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.671740 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.671754 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:04Z","lastTransitionTime":"2026-01-27T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.747781 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:04 crc kubenswrapper[4907]: E0127 18:07:04.748235 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.754449 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 22:56:10.269990817 +0000 UTC Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.764084 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.773976 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.774028 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.774046 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.774067 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.774084 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:04Z","lastTransitionTime":"2026-01-27T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.877533 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.877627 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.877643 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.877666 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.877681 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:04Z","lastTransitionTime":"2026-01-27T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.980338 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.980384 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.980393 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.980411 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:04 crc kubenswrapper[4907]: I0127 18:07:04.980420 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:04Z","lastTransitionTime":"2026-01-27T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.082627 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.082697 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.082711 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.082733 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.082750 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:05Z","lastTransitionTime":"2026-01-27T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.185899 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.185948 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.185959 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.185976 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.185985 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:05Z","lastTransitionTime":"2026-01-27T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.288713 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.288784 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.288797 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.288823 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.288838 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:05Z","lastTransitionTime":"2026-01-27T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.392249 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.392341 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.392353 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.392372 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.392386 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:05Z","lastTransitionTime":"2026-01-27T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.494934 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.494988 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.495004 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.495024 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.495039 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:05Z","lastTransitionTime":"2026-01-27T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.597670 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.597716 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.597729 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.597749 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.597762 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:05Z","lastTransitionTime":"2026-01-27T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.700739 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.700796 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.700810 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.700828 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.700840 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:05Z","lastTransitionTime":"2026-01-27T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.748161 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.748382 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:05 crc kubenswrapper[4907]: E0127 18:07:05.748537 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:05 crc kubenswrapper[4907]: E0127 18:07:05.748381 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.748176 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:05 crc kubenswrapper[4907]: E0127 18:07:05.748754 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.755235 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 06:02:52.867434457 +0000 UTC Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.770345 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.788512 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.805841 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.805901 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.805922 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.805951 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.805971 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:05Z","lastTransitionTime":"2026-01-27T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.810358 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:58Z\\\",\\\"message\\\":\\\"2026-01-27T18:06:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd\\\\n2026-01-27T18:06:13+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd to /host/opt/cni/bin/\\\\n2026-01-27T18:06:13Z [verbose] multus-daemon started\\\\n2026-01-27T18:06:13Z [verbose] Readiness Indicator file check\\\\n2026-01-27T18:06:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.837051 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"message\\\":\\\"18:06:40.625672 6548 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625699 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625705 6548 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625754 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625782 6548 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jqfkt in node crc\\\\nI0127 18:06:40.625771 6548 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:40.625848 6548 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Val\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e99
5781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.850680 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.884873 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.905549 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.908649 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.908709 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.908723 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.908741 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.908752 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:05Z","lastTransitionTime":"2026-01-27T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.920204 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.936479 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.960875 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5c
c361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:05 crc kubenswrapper[4907]: I0127 18:07:05.979627 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.001134 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.011650 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.011691 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.011704 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.011717 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.011729 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:06Z","lastTransitionTime":"2026-01-27T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.021111 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.036980 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.053954 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.073227 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.089395 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b284180-3b83-4111-ad49-c829a2bef7cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba949a5a3dbc832b8d656233d96ff0aebf288d3467d3b4af2efb7f3cd25e23d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.110544 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.115004 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.115065 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.115084 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.115108 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.115125 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:06Z","lastTransitionTime":"2026-01-27T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.127480 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc
/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.218051 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 
18:07:06.218106 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.218118 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.218135 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.218147 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:06Z","lastTransitionTime":"2026-01-27T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.321465 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.321587 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.321616 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.321648 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.321676 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:06Z","lastTransitionTime":"2026-01-27T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.424606 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.424675 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.424693 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.424720 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.424740 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:06Z","lastTransitionTime":"2026-01-27T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.528142 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.528235 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.528259 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.528287 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.528311 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:06Z","lastTransitionTime":"2026-01-27T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.631979 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.632039 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.632057 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.632080 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.632097 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:06Z","lastTransitionTime":"2026-01-27T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.734838 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.734916 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.734937 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.734963 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.734983 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:06Z","lastTransitionTime":"2026-01-27T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.747241 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:06 crc kubenswrapper[4907]: E0127 18:07:06.747426 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.755634 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 10:36:02.668413296 +0000 UTC Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.837821 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.837875 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.837888 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.837907 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.837922 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:06Z","lastTransitionTime":"2026-01-27T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.941259 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.941339 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.941359 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.941387 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:06 crc kubenswrapper[4907]: I0127 18:07:06.941405 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:06Z","lastTransitionTime":"2026-01-27T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.045225 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.045302 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.045344 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.045379 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.045397 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:07Z","lastTransitionTime":"2026-01-27T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.149120 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.149184 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.149200 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.149228 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.149245 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:07Z","lastTransitionTime":"2026-01-27T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.252738 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.253132 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.253150 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.253180 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.253198 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:07Z","lastTransitionTime":"2026-01-27T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.356419 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.356494 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.356513 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.356541 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.356589 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:07Z","lastTransitionTime":"2026-01-27T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.458866 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.458941 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.458959 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.458984 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.459002 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:07Z","lastTransitionTime":"2026-01-27T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.562705 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.562757 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.562774 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.562800 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.562818 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:07Z","lastTransitionTime":"2026-01-27T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.665939 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.666010 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.666029 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.666053 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.666071 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:07Z","lastTransitionTime":"2026-01-27T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.747674 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.747934 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.747945 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:07 crc kubenswrapper[4907]: E0127 18:07:07.748097 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.748215 4907 scope.go:117] "RemoveContainer" containerID="14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7" Jan 27 18:07:07 crc kubenswrapper[4907]: E0127 18:07:07.748348 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:07 crc kubenswrapper[4907]: E0127 18:07:07.748427 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.756423 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:56:20.049021493 +0000 UTC Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.769063 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.769108 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.769119 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.769139 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.769151 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:07Z","lastTransitionTime":"2026-01-27T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.871693 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.871750 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.871768 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.871788 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.871807 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:07Z","lastTransitionTime":"2026-01-27T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.975228 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.975272 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.975285 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.975301 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:07 crc kubenswrapper[4907]: I0127 18:07:07.975312 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:07Z","lastTransitionTime":"2026-01-27T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.079029 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.079095 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.079111 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.079149 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.079170 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:08Z","lastTransitionTime":"2026-01-27T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.182549 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.182624 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.182641 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.182703 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.182719 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:08Z","lastTransitionTime":"2026-01-27T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.228585 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/2.log" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.232354 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7"} Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.233651 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.255464 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5
df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.273241 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:58Z\\\",\\\"message\\\":\\\"2026-01-27T18:06:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd\\\\n2026-01-27T18:06:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd to /host/opt/cni/bin/\\\\n2026-01-27T18:06:13Z [verbose] multus-daemon started\\\\n2026-01-27T18:06:13Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T18:06:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.285533 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.285576 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.285585 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.285597 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.285607 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:08Z","lastTransitionTime":"2026-01-27T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.307442 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"message\\\":\\\"18:06:40.625672 6548 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625699 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625705 6548 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625754 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625782 6548 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jqfkt in node crc\\\\nI0127 18:06:40.625771 6548 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:40.625848 6548 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Val\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.322299 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.337384 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.350227 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.360517 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.376821 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dc
b1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.388318 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.388371 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.388383 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.388402 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.388414 4907 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:08Z","lastTransitionTime":"2026-01-27T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.393657 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc 
kubenswrapper[4907]: I0127 18:07:08.412198 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.426977 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.441657 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.464225 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.474801 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.489442 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba3
42998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.490731 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.490763 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.490775 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.490791 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.490803 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:08Z","lastTransitionTime":"2026-01-27T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.499676 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b284180-3b83-4111-ad49-c829a2bef7cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba949a5a3dbc832b8d656233d96ff0aebf288d3467d3b4af2efb7f3cd25e23d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.512354 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.522780 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.535735 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.592627 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.592678 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.592691 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.592714 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.592726 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:08Z","lastTransitionTime":"2026-01-27T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.695201 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.695372 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.695388 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.695407 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.695420 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:08Z","lastTransitionTime":"2026-01-27T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.747981 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:08 crc kubenswrapper[4907]: E0127 18:07:08.748152 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.757055 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 01:33:52.432561307 +0000 UTC Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.798141 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.798177 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.798186 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.798199 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.798210 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:08Z","lastTransitionTime":"2026-01-27T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.901755 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.901799 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.901814 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.901835 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:08 crc kubenswrapper[4907]: I0127 18:07:08.901850 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:08Z","lastTransitionTime":"2026-01-27T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.005478 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.005544 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.005576 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.005598 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.005610 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:09Z","lastTransitionTime":"2026-01-27T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.108046 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.108100 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.108113 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.108135 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.108148 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:09Z","lastTransitionTime":"2026-01-27T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.211870 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.211952 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.211971 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.212001 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.212019 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:09Z","lastTransitionTime":"2026-01-27T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.239767 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/3.log" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.240826 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/2.log" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.246341 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7" exitCode=1 Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.246408 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7"} Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.246488 4907 scope.go:117] "RemoveContainer" containerID="14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.247524 4907 scope.go:117] "RemoveContainer" containerID="b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7" Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.247796 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.284125 4907 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14fa517befe2b36df61f16458ff770ace19ff80136a13a80fa9d8f489e0800f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"message\\\":\\\"18:06:40.625672 6548 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625699 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 18:06:40.625705 6548 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625754 6548 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jqfkt\\\\nI0127 18:06:40.625782 6548 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jqfkt in node crc\\\\nI0127 18:06:40.625771 6548 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:06:40.625848 6548 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Val\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:07:08Z\\\",\\\"message\\\":\\\"e Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:07:08.679116 6980 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:07:08.679466 6980 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:07:08.679547 6980 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb
55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.306074 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.315733 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.315884 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.315910 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 
18:07:09.315973 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.315996 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:09Z","lastTransitionTime":"2026-01-27T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.321993 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a
45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.346422 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:58Z\\\",\\\"message\\\":\\\"2026-01-27T18:06:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd\\\\n2026-01-27T18:06:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd to /host/opt/cni/bin/\\\\n2026-01-27T18:06:13Z [verbose] multus-daemon started\\\\n2026-01-27T18:06:13Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T18:06:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.365664 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1
da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.388909 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5c
c361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.406054 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc 
kubenswrapper[4907]: I0127 18:07:09.419532 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.419617 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.419636 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.419664 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.419746 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:09Z","lastTransitionTime":"2026-01-27T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.431624 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.447791 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.464498 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.480162 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.483491 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.483695 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.48365456 +0000 UTC m=+148.612937222 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.494400 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42
ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329acd789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.507309 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc2
76e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.519342 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.521854 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.521879 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.521887 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.521901 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.521912 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:09Z","lastTransitionTime":"2026-01-27T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.534777 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.547039 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.556302 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.564728 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b284180-3b83-4111-ad49-c829a2bef7cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba949a5a3dbc832b8d656233d96ff0aebf288d3467d3b4af2efb7f3cd25e23d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.575306 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.585545 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.587719 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.587760 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.587802 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.586552 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.588022 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.588049 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.588052 4907 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.588063 4907 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.588071 4907 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.588123 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.588102631 +0000 UTC m=+148.717385243 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.587952 4907 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.588301 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.588270256 +0000 UTC m=+148.717552918 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.588014 4907 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.588357 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.588348898 +0000 UTC m=+148.717631590 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.588476 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.588462611 +0000 UTC m=+148.717745243 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.624429 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.624508 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.624532 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.624589 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.624609 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:09Z","lastTransitionTime":"2026-01-27T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.727474 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.727535 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.727546 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.727587 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.727600 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:09Z","lastTransitionTime":"2026-01-27T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.747946 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.747976 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.747949 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.748099 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.748256 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:09 crc kubenswrapper[4907]: E0127 18:07:09.748333 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.757376 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 06:13:27.464826909 +0000 UTC Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.830098 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.830180 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.830197 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.830230 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.830246 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:09Z","lastTransitionTime":"2026-01-27T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.933145 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.933202 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.933214 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.933232 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:09 crc kubenswrapper[4907]: I0127 18:07:09.933243 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:09Z","lastTransitionTime":"2026-01-27T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.036505 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.036600 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.036621 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.036653 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.036673 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:10Z","lastTransitionTime":"2026-01-27T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.140482 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.140611 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.140633 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.140664 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.140685 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:10Z","lastTransitionTime":"2026-01-27T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.244190 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.244268 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.244291 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.244320 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.244342 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:10Z","lastTransitionTime":"2026-01-27T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.252512 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/3.log" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.258090 4907 scope.go:117] "RemoveContainer" containerID="b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7" Jan 27 18:07:10 crc kubenswrapper[4907]: E0127 18:07:10.258433 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.274318 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.291426 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.310598 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba3
42998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.326414 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.343530 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.348359 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.348486 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.348514 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.348546 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.348602 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:10Z","lastTransitionTime":"2026-01-27T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.359795 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.388008 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.403117 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b284180-3b83-4111-ad49-c829a2bef7cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba949a5a3dbc832b8d656233d96ff0aebf288d3467d3b4af2efb7f3cd25e23d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.418398 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.449894 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:07:08Z\\\",\\\"message\\\":\\\"e Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:07:08.679116 6980 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:07:08.679466 6980 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:07:08.679547 6980 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:07:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e99
5781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.451855 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.451929 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.451955 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.451986 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.452013 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:10Z","lastTransitionTime":"2026-01-27T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.476583 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.492654 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.513776 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:58Z\\\",\\\"message\\\":\\\"2026-01-27T18:06:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd\\\\n2026-01-27T18:06:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd to /host/opt/cni/bin/\\\\n2026-01-27T18:06:13Z [verbose] multus-daemon started\\\\n2026-01-27T18:06:13Z [verbose] Readiness Indicator file check\\\\n2026-01-27T18:06:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"ho
st-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.527197 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.543918 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa
41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.554460 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.554493 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.554502 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.554516 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.554525 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:10Z","lastTransitionTime":"2026-01-27T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.557489 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc 
kubenswrapper[4907]: I0127 18:07:10.578074 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.591361 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.602854 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.656784 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.656835 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.656851 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.656874 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.656890 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:10Z","lastTransitionTime":"2026-01-27T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.747515 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:10 crc kubenswrapper[4907]: E0127 18:07:10.747955 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.757853 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 06:26:02.814091342 +0000 UTC Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.759459 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.759490 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.759502 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.759515 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.759525 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:10Z","lastTransitionTime":"2026-01-27T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.862036 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.862127 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.862144 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.862164 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.862175 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:10Z","lastTransitionTime":"2026-01-27T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.964245 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.964325 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.964340 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.964365 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:10 crc kubenswrapper[4907]: I0127 18:07:10.964383 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:10Z","lastTransitionTime":"2026-01-27T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.067594 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.067646 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.067657 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.067675 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.067687 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:11Z","lastTransitionTime":"2026-01-27T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.170794 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.170863 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.170881 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.170909 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.170927 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:11Z","lastTransitionTime":"2026-01-27T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.274268 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.274334 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.274353 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.274433 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.274455 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:11Z","lastTransitionTime":"2026-01-27T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.378024 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.378106 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.378132 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.378163 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.378187 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:11Z","lastTransitionTime":"2026-01-27T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.481859 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.481938 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.481964 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.481997 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.482020 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:11Z","lastTransitionTime":"2026-01-27T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.584994 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.585073 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.585099 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.585134 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.585158 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:11Z","lastTransitionTime":"2026-01-27T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.688358 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.688418 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.688437 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.688461 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.688478 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:11Z","lastTransitionTime":"2026-01-27T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.748057 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.748095 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.748195 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:11 crc kubenswrapper[4907]: E0127 18:07:11.748877 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:11 crc kubenswrapper[4907]: E0127 18:07:11.749070 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:11 crc kubenswrapper[4907]: E0127 18:07:11.749264 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.758889 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 07:53:13.634605027 +0000 UTC Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.791407 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.791467 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.791489 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.791518 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.791541 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:11Z","lastTransitionTime":"2026-01-27T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.894399 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.895236 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.895350 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.895616 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.895739 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:11Z","lastTransitionTime":"2026-01-27T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.998613 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.999000 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.999104 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.999217 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:11 crc kubenswrapper[4907]: I0127 18:07:11.999309 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:11Z","lastTransitionTime":"2026-01-27T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.102471 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.102500 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.102511 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.102534 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.102549 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:12Z","lastTransitionTime":"2026-01-27T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.205721 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.205754 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.205763 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.205778 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.205790 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:12Z","lastTransitionTime":"2026-01-27T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.326528 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.326981 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.327051 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.327121 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.327188 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:12Z","lastTransitionTime":"2026-01-27T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.429965 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.430039 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.430051 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.430069 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.430080 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:12Z","lastTransitionTime":"2026-01-27T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.532468 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.532505 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.532516 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.532532 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.532544 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:12Z","lastTransitionTime":"2026-01-27T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.635039 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.635079 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.635091 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.635106 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.635117 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:12Z","lastTransitionTime":"2026-01-27T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.738655 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.738707 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.738723 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.738747 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.738763 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:12Z","lastTransitionTime":"2026-01-27T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.747263 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:12 crc kubenswrapper[4907]: E0127 18:07:12.747404 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.759832 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 11:45:34.207482495 +0000 UTC Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.842270 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.842315 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.842325 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.842342 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.842356 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:12Z","lastTransitionTime":"2026-01-27T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.945721 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.946767 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.946908 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.947051 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:12 crc kubenswrapper[4907]: I0127 18:07:12.947195 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:12Z","lastTransitionTime":"2026-01-27T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.050314 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.050389 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.050417 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.050446 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.050465 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.154029 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.154079 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.154093 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.154109 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.154121 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.263319 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.263811 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.263946 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.264074 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.264614 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.367954 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.367998 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.368038 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.368057 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.368071 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.471302 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.471685 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.471840 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.471944 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.472021 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.526387 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.526466 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.526494 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.526526 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.526550 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: E0127 18:07:13.550346 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.556398 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.556434 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.556445 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.556461 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.556473 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: E0127 18:07:13.571498 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.577321 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.577441 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.577539 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.577608 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.577634 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: E0127 18:07:13.598778 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.604778 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.604855 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.604881 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.604911 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.604936 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: E0127 18:07:13.626704 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.631871 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.631955 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.631971 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.632015 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.632030 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: E0127 18:07:13.649631 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:13 crc kubenswrapper[4907]: E0127 18:07:13.649868 4907 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.651915 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.651971 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.651983 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.652002 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.652015 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.747422 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.747516 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.747662 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:13 crc kubenswrapper[4907]: E0127 18:07:13.747581 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:13 crc kubenswrapper[4907]: E0127 18:07:13.748241 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:13 crc kubenswrapper[4907]: E0127 18:07:13.748294 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.759970 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 04:38:43.86760472 +0000 UTC Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.760372 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.760419 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.760436 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.760459 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.760478 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.862839 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.862890 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.862910 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.862931 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.862945 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.966182 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.966234 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.966253 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.966279 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:13 crc kubenswrapper[4907]: I0127 18:07:13.966297 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:13Z","lastTransitionTime":"2026-01-27T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.070004 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.070046 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.070055 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.070068 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.070077 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:14Z","lastTransitionTime":"2026-01-27T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.174035 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.174551 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.174610 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.174644 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.174667 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:14Z","lastTransitionTime":"2026-01-27T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.278299 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.278417 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.278452 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.278482 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.278504 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:14Z","lastTransitionTime":"2026-01-27T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.380991 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.381071 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.381097 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.381129 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.381152 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:14Z","lastTransitionTime":"2026-01-27T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.483904 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.484002 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.484036 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.484069 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.484092 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:14Z","lastTransitionTime":"2026-01-27T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.586090 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.586124 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.586132 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.586145 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.586153 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:14Z","lastTransitionTime":"2026-01-27T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.689084 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.689152 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.689178 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.689208 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.689231 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:14Z","lastTransitionTime":"2026-01-27T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.747436 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:14 crc kubenswrapper[4907]: E0127 18:07:14.747641 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.761094 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 07:36:27.74474011 +0000 UTC Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.792022 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.792083 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.792101 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.792124 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.792140 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:14Z","lastTransitionTime":"2026-01-27T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.894344 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.894382 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.894391 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.894405 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.894414 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:14Z","lastTransitionTime":"2026-01-27T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.997030 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.997102 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.997112 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.997135 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:14 crc kubenswrapper[4907]: I0127 18:07:14.997152 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:14Z","lastTransitionTime":"2026-01-27T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.100213 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.100350 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.100366 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.100382 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.100392 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:15Z","lastTransitionTime":"2026-01-27T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.203206 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.203284 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.203309 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.203338 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.203363 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:15Z","lastTransitionTime":"2026-01-27T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.306810 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.306874 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.306893 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.306920 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.306938 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:15Z","lastTransitionTime":"2026-01-27T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.409787 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.409859 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.409884 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.409908 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.409929 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:15Z","lastTransitionTime":"2026-01-27T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.516169 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.516314 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.516339 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.516368 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.516388 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:15Z","lastTransitionTime":"2026-01-27T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.620109 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.620190 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.620204 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.620227 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.620241 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:15Z","lastTransitionTime":"2026-01-27T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.723266 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.723332 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.723349 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.723378 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.723401 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:15Z","lastTransitionTime":"2026-01-27T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.747948 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.747995 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.748022 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:15 crc kubenswrapper[4907]: E0127 18:07:15.748155 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:15 crc kubenswrapper[4907]: E0127 18:07:15.748277 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:15 crc kubenswrapper[4907]: E0127 18:07:15.748396 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.761933 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 23:25:50.366935437 +0000 UTC Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.765465 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:15 crc 
kubenswrapper[4907]: I0127 18:07:15.798062 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.820540 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.826119 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.826175 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.826191 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.826215 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.826228 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:15Z","lastTransitionTime":"2026-01-27T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.842100 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.860518 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.886217 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dcb1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5c
c361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.919055 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.929959 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.930002 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.930014 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.930034 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.930050 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:15Z","lastTransitionTime":"2026-01-27T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.941530 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T
18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.955064 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.969839 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.982012 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:15 crc kubenswrapper[4907]: I0127 18:07:15.993834 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.006328 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b284180-3b83-4111-ad49-c829a2bef7cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba949a5a3dbc832b8d656233d96ff0aebf288d3467d3b4af2efb7f3cd25e23d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.022395 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.034005 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530
f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.035047 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.035710 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.035748 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:16 crc 
kubenswrapper[4907]: I0127 18:07:16.035775 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.035795 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:16Z","lastTransitionTime":"2026-01-27T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.049709 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.063051 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.076795 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:58Z\\\",\\\"message\\\":\\\"2026-01-27T18:06:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd\\\\n2026-01-27T18:06:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd to /host/opt/cni/bin/\\\\n2026-01-27T18:06:13Z [verbose] multus-daemon started\\\\n2026-01-27T18:06:13Z [verbose] Readiness Indicator file check\\\\n2026-01-27T18:06:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"ho
st-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.094231 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:07:08Z\\\",\\\"message\\\":\\\"e Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e 
UUID: UUIDName:}]\\\\nI0127 18:07:08.679116 6980 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:07:08.679466 6980 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:07:08.679547 6980 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:07:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e99
5781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.139008 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.139069 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.139086 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.139110 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.139127 4907 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:16Z","lastTransitionTime":"2026-01-27T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.241753 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.241793 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.241807 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.241824 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.241837 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:16Z","lastTransitionTime":"2026-01-27T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.344839 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.344899 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.344916 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.344943 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.344961 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:16Z","lastTransitionTime":"2026-01-27T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.448547 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.448614 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.448626 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.448644 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.448656 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:16Z","lastTransitionTime":"2026-01-27T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.550678 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.550834 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.550856 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.550879 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.550900 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:16Z","lastTransitionTime":"2026-01-27T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.654306 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.654381 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.654404 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.654432 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.654451 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:16Z","lastTransitionTime":"2026-01-27T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.747840 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:16 crc kubenswrapper[4907]: E0127 18:07:16.748144 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.757727 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.757781 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.757799 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.757821 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.757838 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:16Z","lastTransitionTime":"2026-01-27T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.763110 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 08:17:28.554729222 +0000 UTC Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.862132 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.862183 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.862205 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.862227 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.862244 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:16Z","lastTransitionTime":"2026-01-27T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.965077 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.965136 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.965149 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.965169 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:16 crc kubenswrapper[4907]: I0127 18:07:16.965186 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:16Z","lastTransitionTime":"2026-01-27T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.068376 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.068451 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.068474 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.068508 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.068529 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:17Z","lastTransitionTime":"2026-01-27T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.171432 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.171510 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.171540 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.171610 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.171640 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:17Z","lastTransitionTime":"2026-01-27T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.274454 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.274525 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.274544 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.274608 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.274628 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:17Z","lastTransitionTime":"2026-01-27T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.377330 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.377388 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.377406 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.377433 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.377455 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:17Z","lastTransitionTime":"2026-01-27T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.480874 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.480934 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.480952 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.480974 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.480991 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:17Z","lastTransitionTime":"2026-01-27T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.584627 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.584664 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.584674 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.584689 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.584699 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:17Z","lastTransitionTime":"2026-01-27T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.688691 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.688753 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.688766 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.688813 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.688833 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:17Z","lastTransitionTime":"2026-01-27T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.747722 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.747779 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:17 crc kubenswrapper[4907]: E0127 18:07:17.747897 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.748024 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:17 crc kubenswrapper[4907]: E0127 18:07:17.748180 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:17 crc kubenswrapper[4907]: E0127 18:07:17.748470 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.771904 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 22:40:52.17760234 +0000 UTC Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.791664 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.791703 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.791713 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.791748 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.791758 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:17Z","lastTransitionTime":"2026-01-27T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.897746 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.897833 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.897858 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.897890 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:17 crc kubenswrapper[4907]: I0127 18:07:17.897922 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:17Z","lastTransitionTime":"2026-01-27T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.000897 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.000955 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.000981 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.001010 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.001028 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:18Z","lastTransitionTime":"2026-01-27T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.104002 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.104053 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.104061 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.104076 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.104087 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:18Z","lastTransitionTime":"2026-01-27T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.206399 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.206471 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.206495 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.206528 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.206552 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:18Z","lastTransitionTime":"2026-01-27T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.309734 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.309810 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.309835 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.309872 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.309910 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:18Z","lastTransitionTime":"2026-01-27T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.412869 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.412918 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.412936 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.412962 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.412981 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:18Z","lastTransitionTime":"2026-01-27T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.516257 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.516318 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.516335 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.516363 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.516380 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:18Z","lastTransitionTime":"2026-01-27T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.619322 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.619389 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.619409 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.619433 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.619451 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:18Z","lastTransitionTime":"2026-01-27T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.723254 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.723327 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.723352 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.723381 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.723403 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:18Z","lastTransitionTime":"2026-01-27T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.747886 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:18 crc kubenswrapper[4907]: E0127 18:07:18.748083 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.772943 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 08:06:48.718792183 +0000 UTC Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.826844 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.826922 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.826942 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.826969 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.826989 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:18Z","lastTransitionTime":"2026-01-27T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.930961 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.931043 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.931070 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.931117 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:18 crc kubenswrapper[4907]: I0127 18:07:18.931141 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:18Z","lastTransitionTime":"2026-01-27T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.034483 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.034590 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.034617 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.034650 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.034676 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:19Z","lastTransitionTime":"2026-01-27T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.138418 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.138482 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.138493 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.138513 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.138525 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:19Z","lastTransitionTime":"2026-01-27T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.241357 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.241412 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.241422 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.241439 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.241450 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:19Z","lastTransitionTime":"2026-01-27T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.344955 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.345000 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.345011 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.345027 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.345043 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:19Z","lastTransitionTime":"2026-01-27T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.447573 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.447629 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.447644 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.447664 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.447677 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:19Z","lastTransitionTime":"2026-01-27T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.551244 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.551322 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.551342 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.551371 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.551389 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:19Z","lastTransitionTime":"2026-01-27T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.654407 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.654467 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.654481 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.654502 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.654514 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:19Z","lastTransitionTime":"2026-01-27T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.747547 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:19 crc kubenswrapper[4907]: E0127 18:07:19.747805 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.747872 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.748046 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:19 crc kubenswrapper[4907]: E0127 18:07:19.748091 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:19 crc kubenswrapper[4907]: E0127 18:07:19.748185 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.756632 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.756681 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.756699 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.756720 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.756736 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:19Z","lastTransitionTime":"2026-01-27T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.774144 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:58:11.837025043 +0000 UTC Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.860351 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.860399 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.860410 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.860429 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.860441 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:19Z","lastTransitionTime":"2026-01-27T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.963185 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.963213 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.963220 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.963233 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:19 crc kubenswrapper[4907]: I0127 18:07:19.963244 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:19Z","lastTransitionTime":"2026-01-27T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.065216 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.065257 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.065269 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.065284 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.065295 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:20Z","lastTransitionTime":"2026-01-27T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.168214 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.168319 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.168345 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.168383 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.168411 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:20Z","lastTransitionTime":"2026-01-27T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.271364 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.271418 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.271444 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.271470 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.271487 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:20Z","lastTransitionTime":"2026-01-27T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.374735 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.374798 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.374833 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.374871 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.374896 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:20Z","lastTransitionTime":"2026-01-27T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.478127 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.478203 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.478228 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.478254 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.478272 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:20Z","lastTransitionTime":"2026-01-27T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.581538 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.581645 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.581663 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.581688 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.581706 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:20Z","lastTransitionTime":"2026-01-27T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.684962 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.685021 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.685039 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.685064 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.685082 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:20Z","lastTransitionTime":"2026-01-27T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.747868 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:20 crc kubenswrapper[4907]: E0127 18:07:20.748091 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.775372 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 05:24:39.133503254 +0000 UTC Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.787841 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.787902 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.787921 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.787953 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.787977 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:20Z","lastTransitionTime":"2026-01-27T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.890377 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.890458 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.890479 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.890507 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.890529 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:20Z","lastTransitionTime":"2026-01-27T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.993732 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.993793 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.993811 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.993838 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:20 crc kubenswrapper[4907]: I0127 18:07:20.993856 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:20Z","lastTransitionTime":"2026-01-27T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.098435 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.098511 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.098533 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.098641 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.098666 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:21Z","lastTransitionTime":"2026-01-27T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.201643 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.201733 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.201764 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.201794 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.201816 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:21Z","lastTransitionTime":"2026-01-27T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.303958 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.304005 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.304014 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.304032 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.304045 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:21Z","lastTransitionTime":"2026-01-27T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.406762 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.406833 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.406850 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.406876 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.406893 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:21Z","lastTransitionTime":"2026-01-27T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.509604 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.509680 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.509707 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.509733 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.509752 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:21Z","lastTransitionTime":"2026-01-27T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.613442 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.613519 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.613537 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.613587 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.613609 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:21Z","lastTransitionTime":"2026-01-27T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.716368 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.716423 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.716441 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.716466 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.716485 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:21Z","lastTransitionTime":"2026-01-27T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.747379 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.747420 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.747529 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:21 crc kubenswrapper[4907]: E0127 18:07:21.747773 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:21 crc kubenswrapper[4907]: E0127 18:07:21.748173 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:21 crc kubenswrapper[4907]: E0127 18:07:21.748442 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.775498 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 05:30:21.957651324 +0000 UTC Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.818925 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.818973 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.818986 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.819005 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.819018 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:21Z","lastTransitionTime":"2026-01-27T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.922274 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.922361 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.922388 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.922441 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:21 crc kubenswrapper[4907]: I0127 18:07:21.922471 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:21Z","lastTransitionTime":"2026-01-27T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.025622 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.025694 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.025735 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.025768 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.025794 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:22Z","lastTransitionTime":"2026-01-27T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.129251 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.129335 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.129360 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.129390 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.129412 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:22Z","lastTransitionTime":"2026-01-27T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.232734 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.232780 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.232797 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.232822 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.232840 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:22Z","lastTransitionTime":"2026-01-27T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.336207 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.336272 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.336293 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.336334 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.336356 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:22Z","lastTransitionTime":"2026-01-27T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.438525 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.438614 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.438635 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.438659 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.438676 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:22Z","lastTransitionTime":"2026-01-27T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.542374 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.542631 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.542672 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.542715 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.542747 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:22Z","lastTransitionTime":"2026-01-27T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.646731 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.646780 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.646796 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.646819 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.646836 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:22Z","lastTransitionTime":"2026-01-27T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.748011 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:22 crc kubenswrapper[4907]: E0127 18:07:22.748214 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.749386 4907 scope.go:117] "RemoveContainer" containerID="b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7" Jan 27 18:07:22 crc kubenswrapper[4907]: E0127 18:07:22.749762 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.750273 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.750335 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.750350 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.750376 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.750393 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:22Z","lastTransitionTime":"2026-01-27T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.776191 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 23:04:57.643658389 +0000 UTC Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.854169 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.854222 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.854234 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.854295 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.854319 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:22Z","lastTransitionTime":"2026-01-27T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.957469 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.957528 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.957620 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.957655 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:22 crc kubenswrapper[4907]: I0127 18:07:22.957676 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:22Z","lastTransitionTime":"2026-01-27T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.060503 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.060544 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.060578 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.060603 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.060618 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.164003 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.164073 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.164092 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.164118 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.164139 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.266709 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.266768 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.266787 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.266810 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.266826 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.369100 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.369211 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.369248 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.369332 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.369356 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.475793 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.475884 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.475909 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.475939 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.475962 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.579376 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.579450 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.579468 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.579496 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.579513 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.682833 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.682897 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.682933 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.682965 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.682987 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.684535 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.684643 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.684677 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.684704 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.684725 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: E0127 18:07:23.704485 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.709542 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.709630 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.709654 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.709685 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.709707 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: E0127 18:07:23.725123 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.730367 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.730437 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.730462 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.730489 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.730517 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.748094 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.748128 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:23 crc kubenswrapper[4907]: E0127 18:07:23.748217 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.748316 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:23 crc kubenswrapper[4907]: E0127 18:07:23.748450 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:23 crc kubenswrapper[4907]: E0127 18:07:23.748691 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:23 crc kubenswrapper[4907]: E0127 18:07:23.753001 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.757894 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.757932 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.757944 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.757961 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.757973 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: E0127 18:07:23.773120 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.776505 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 17:54:17.272510077 +0000 UTC Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.778098 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.778143 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.778159 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.778179 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.778192 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: E0127 18:07:23.799165 4907 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a0d6a18c-7bf9-4fbd-a7bd-5cf328ac7f4f\\\",\\\"systemUUID\\\":\\\"0be71cc9-e3e6-47b6-b7c1-354451a0e2c5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:23 crc kubenswrapper[4907]: E0127 18:07:23.799401 4907 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.801184 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.801251 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.801278 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.801307 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.801326 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.905107 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.905152 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.905165 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.905182 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:23 crc kubenswrapper[4907]: I0127 18:07:23.905197 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:23Z","lastTransitionTime":"2026-01-27T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.008410 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.008485 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.008511 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.008546 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.008609 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:24Z","lastTransitionTime":"2026-01-27T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.110922 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.110979 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.110996 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.111018 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.111034 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:24Z","lastTransitionTime":"2026-01-27T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.213570 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.213645 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.213690 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.213714 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.213728 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:24Z","lastTransitionTime":"2026-01-27T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.316113 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.316216 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.316234 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.316259 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.316276 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:24Z","lastTransitionTime":"2026-01-27T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.419686 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.419735 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.419750 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.419768 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.419780 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:24Z","lastTransitionTime":"2026-01-27T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.522687 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.522758 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.522783 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.522814 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.522836 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:24Z","lastTransitionTime":"2026-01-27T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.626003 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.626045 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.626054 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.626069 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.626078 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:24Z","lastTransitionTime":"2026-01-27T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.729497 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.729606 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.729642 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.729676 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.729701 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:24Z","lastTransitionTime":"2026-01-27T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.748494 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:24 crc kubenswrapper[4907]: E0127 18:07:24.748758 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.777674 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 01:31:36.481462642 +0000 UTC Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.832754 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.832821 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.832846 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.832877 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.832899 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:24Z","lastTransitionTime":"2026-01-27T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.939429 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.939488 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.939506 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.939529 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:24 crc kubenswrapper[4907]: I0127 18:07:24.939586 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:24Z","lastTransitionTime":"2026-01-27T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.042453 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.042528 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.042576 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.042604 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.042619 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:25Z","lastTransitionTime":"2026-01-27T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.145986 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.146058 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.146079 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.146105 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.146149 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:25Z","lastTransitionTime":"2026-01-27T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.248429 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.248467 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.248479 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.248494 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.248506 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:25Z","lastTransitionTime":"2026-01-27T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.351823 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.351895 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.352006 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.352046 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.352075 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:25Z","lastTransitionTime":"2026-01-27T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.455757 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.455835 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.455853 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.455876 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.455893 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:25Z","lastTransitionTime":"2026-01-27T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.559259 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.559332 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.559371 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.559414 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.559439 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:25Z","lastTransitionTime":"2026-01-27T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.662628 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.662708 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.662736 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.662766 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.662789 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:25Z","lastTransitionTime":"2026-01-27T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.747590 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.747787 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.747859 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:25 crc kubenswrapper[4907]: E0127 18:07:25.748101 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:25 crc kubenswrapper[4907]: E0127 18:07:25.748216 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:25 crc kubenswrapper[4907]: E0127 18:07:25.748352 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.767888 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.767948 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.767965 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.767990 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.768056 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:25Z","lastTransitionTime":"2026-01-27T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.768442 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.779424 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 11:22:54.561760609 +0000 UTC Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.785154 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9plnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"195088d8-09aa-4943-8825-ddd4cb453056\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d828b455733de8a42fb8e4c9282eda5df0c3727b4b930a01336290579c40ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-997jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9plnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.808593 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fgtpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985b7738-a27c-4276-8160-c2baa64ab7f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:06:58Z\\\",\\\"message\\\":\\\"2026-01-27T18:06:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd\\\\n2026-01-27T18:06:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_65ee4f08-b388-4dc6-8ea4-c21e720d41dd to /host/opt/cni/bin/\\\\n2026-01-27T18:06:13Z [verbose] multus-daemon started\\\\n2026-01-27T18:06:13Z [verbose] Readiness Indicator file check\\\\n2026-01-27T18:06:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"ho
st-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nl2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fgtpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.839673 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:07:08Z\\\",\\\"message\\\":\\\"e Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e 
UUID: UUIDName:}]\\\\nI0127 18:07:08.679116 6980 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:07:08.679466 6980 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:07:08.679547 6980 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:07:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4293ee9413fadc5e99
5781d565049f78682de4e71193eb55f3acb8008d525e71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkx4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj9w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.864352 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0308222f-5ba4-4c3a-a0a5-d3f43e72e70d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756cc75ad25032b03c5c9e181d4f546e6182d00663e87a3855fbf61cce132b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://423c218fd7297a0cd3b506e8f315ddc1f7d1452d1f2eff8bf4b7c10eaa7990a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb458bfa3d0f13626dce6afefac1c3be1041ab2d80e6fb4f6a064fcc537407a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958a1a353344a3ae281c90efebecfced2b0c92b8df318e65c8e40352dfec4035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9a10e2408c93e68a72e58146cc0e24b29ac0d9cb36ec07137a0a3ae491019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ccbe7b2b50cda613569dcc9866d19ac898eeb9fadb75cafd3d0c8266b0101d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fff5a7df22fe77c85f7d2492ae4e852f447bc383e1f1abe4f7b9466b3afecf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8df338e6435cc07886aa179ca30b0a356233739bedcb915b5dfc8b12e394b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.877358 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.877433 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.877456 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.877496 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.877516 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:25Z","lastTransitionTime":"2026-01-27T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.886689 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3186d58b5b8c812cd70d408e2dc1bee2e88f69c63ecd0e80facfcfe3a620948d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9906769e069617728cc645f376732086ab4071f33413ba18afd0395bf1b4002f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.907779 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e681429ec9d2f5d76c9340b738654f9728e006d0021999c782346a08abe5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.923782 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4rxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"317dc29e-e919-4bac-894d-e54b69538c31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3b25ecd11a8b0e9e808c1da22886fe16dcb8bc3c0783f5c505b1a76b0403e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t69cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4rxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.947538 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"722204a2-dbb1-4b08-909b-09fdea49b7a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://018fbc1dbaa1e5aba428fc76192279c3310ff6dc
b1791493e65c18d33fc637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63b78743b890a51b78fd4b3b2cd4d533b36a271159129160ff499bea4f2aba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d3702eac044ac474ca3c1838bbb85659657d20019ad4206cea9e8c3136ce9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b23a0330705a278c52e156dcee049b93063b4d66ee231026b64f68eb232483fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ca5cc361f6fad66db4b6e6ebe2bed19d6465c4863d6b43238b1784bcefa072e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b36c690b9eaeb797264f12e63156fa41c60acc8e58559ce84a0d498768b76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52e54df2a2f288676948ae385ed11bea89b52d27824483e7eae37d2f6d4f25f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kncv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jqfkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.960835 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeaae2ee-c57b-4323-9d3c-563d87d85f08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2z5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:25 crc 
kubenswrapper[4907]: I0127 18:07:25.981879 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.981933 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.981946 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.981970 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.981985 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:25Z","lastTransitionTime":"2026-01-27T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:25 crc kubenswrapper[4907]: I0127 18:07:25.981299 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3972e3bc-1760-4cb8-b2d0-6758a782c079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:06:05.307330 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:06:05.308712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:06:05.309612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2331010189/tls.crt::/tmp/serving-cert-2331010189/tls.key\\\\\\\"\\\\nI0127 18:06:05.859140 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:06:05.862986 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:06:05.863010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:06:05.863045 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:06:05.863055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:06:05.869407 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:06:05.869437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:06:05.869445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:06:05.869448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:06:05.869451 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:06:05.869454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:06:05.869756 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:06:05.875934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.000395 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a10f3feac1e57e629261552ff0fec6fc8811c43eb2d9cae8400a9b467c329e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.018222 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.032859 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.046690 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fe1d896-28da-48d2-9a3e-e4154091a601\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82809139ee8c83d6a93e333553276cc4510959c0a9699186fb758bcaefb8314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ebec1e1cf2a0697165566f65cf9439329ac
d789c3660dd00eb56bbab560cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z99h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz9tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.066098 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3352fde-847c-41ed-96ac-408ed0c69a9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://616aa83e7e4bd5595d7686dbea3770418045b9c5431a0bac5b3a61686350daf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd92e4dd56696fea2e54d6d663e14e6ecd8fcacb3825f0649e45bc0a41593e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e706ef486aca95b366291c44990f3abc8420820a78cda01799aad8976ac142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7d4621491f2e677fa2c91d15f7d2dbecbb5dfa8da71a11ddf89b0d9216e438b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.084446 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b284180-3b83-4111-ad49-c829a2bef7cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba949a5a3dbc832b8d656233d96ff0aebf288d3467d3b4af2efb7f3cd25e23d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cb6c68b3f7bb6873ff4701ad23cd3342a79fbd1c841a3e39c6c8df5f14076e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.084663 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.084713 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.084730 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.084754 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.084771 4907 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:26Z","lastTransitionTime":"2026-01-27T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.105326 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe89c65-6f13-405a-b772-3eefd67e4d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8eecfb2dfbe8642ca3a9cf7e06600baca628a4c52740c38fe1cd796c75eb08ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd282a36fce9d209d38645bd33a1f618c00aa6a292057bc3a0275f976c6e3ca8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a42755563be0ebbc619ac5b5d5cda40cb3396dc0a49a93e8c5088b914497fdad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:05:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:26Z is after 2025-08-24T17:21:41Z"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.121905 4907 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"437f8dd5-d37d-4b51-a08f-8c68b3bc038a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76af99f95e34232728380c58b8db7ba6476572a7b3ef065f6d995750e63fcc9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n59rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wgvjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:07:26Z is after 2025-08-24T17:21:41Z"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.187938 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.187996 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.188013 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.188046 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.188065 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:26Z","lastTransitionTime":"2026-01-27T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.290662 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.290736 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.290761 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.290792 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.290817 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:26Z","lastTransitionTime":"2026-01-27T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.393906 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.393983 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.394001 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.394026 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.394044 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:26Z","lastTransitionTime":"2026-01-27T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.497525 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.497635 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.497654 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.497678 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.497734 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:26Z","lastTransitionTime":"2026-01-27T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.601895 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.601968 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.601985 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.602015 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.602033 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:26Z","lastTransitionTime":"2026-01-27T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.705677 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.705739 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.705756 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.705779 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.705799 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:26Z","lastTransitionTime":"2026-01-27T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.747443 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k"
Jan 27 18:07:26 crc kubenswrapper[4907]: E0127 18:07:26.747719 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.780284 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 00:27:33.654314233 +0000 UTC
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.808352 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.808427 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.808446 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.808474 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.808495 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:26Z","lastTransitionTime":"2026-01-27T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.911648 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.911719 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.911738 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.911766 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:26 crc kubenswrapper[4907]: I0127 18:07:26.911785 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:26Z","lastTransitionTime":"2026-01-27T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.015108 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.015175 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.015192 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.015218 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.015237 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:27Z","lastTransitionTime":"2026-01-27T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.117324 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.117359 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.117369 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.117383 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.117392 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:27Z","lastTransitionTime":"2026-01-27T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.220235 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.220289 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.220305 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.220321 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.220333 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:27Z","lastTransitionTime":"2026-01-27T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.322282 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.322356 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.322373 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.322400 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.322419 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:27Z","lastTransitionTime":"2026-01-27T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.425207 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.425306 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.425339 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.425375 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.425395 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:27Z","lastTransitionTime":"2026-01-27T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.528442 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.528500 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.528517 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.528540 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.528587 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:27Z","lastTransitionTime":"2026-01-27T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.632066 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.632123 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.632147 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.632174 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.632196 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:27Z","lastTransitionTime":"2026-01-27T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.735471 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.735523 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.735546 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.735611 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.735634 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:27Z","lastTransitionTime":"2026-01-27T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.747948 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.747966 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:07:27 crc kubenswrapper[4907]: E0127 18:07:27.748254 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.748307 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:07:27 crc kubenswrapper[4907]: E0127 18:07:27.748409 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:07:27 crc kubenswrapper[4907]: E0127 18:07:27.748957 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.780916 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 01:27:16.21356946 +0000 UTC
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.838442 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.838597 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.838631 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.838670 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.838691 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:27Z","lastTransitionTime":"2026-01-27T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.941512 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.941715 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.941740 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.941768 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:27 crc kubenswrapper[4907]: I0127 18:07:27.941791 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:27Z","lastTransitionTime":"2026-01-27T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.045452 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.045528 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.045548 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.045605 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.045625 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:28Z","lastTransitionTime":"2026-01-27T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.148740 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.148800 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.148821 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.148845 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.148862 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:28Z","lastTransitionTime":"2026-01-27T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.251780 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.251845 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.251854 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.251869 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.251888 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:28Z","lastTransitionTime":"2026-01-27T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.354917 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.354983 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.355003 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.355029 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.355048 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:28Z","lastTransitionTime":"2026-01-27T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.457876 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.457930 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.457942 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.457961 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.457974 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:28Z","lastTransitionTime":"2026-01-27T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.561312 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.561362 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.561378 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.561396 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.561408 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:28Z","lastTransitionTime":"2026-01-27T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.664145 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.664205 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.664225 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.664253 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.664271 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:28Z","lastTransitionTime":"2026-01-27T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.747405 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:28 crc kubenswrapper[4907]: E0127 18:07:28.747620 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.766462 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.766516 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.766532 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.766587 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.766607 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:28Z","lastTransitionTime":"2026-01-27T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.781678 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 09:44:29.572101043 +0000 UTC Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.870005 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.870086 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.870104 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.870132 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.870153 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:28Z","lastTransitionTime":"2026-01-27T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.973758 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.973829 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.973849 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.973873 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:28 crc kubenswrapper[4907]: I0127 18:07:28.973891 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:28Z","lastTransitionTime":"2026-01-27T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.077338 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.077405 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.077424 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.077449 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.077471 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:29Z","lastTransitionTime":"2026-01-27T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.181037 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.181093 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.181116 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.181145 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.181166 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:29Z","lastTransitionTime":"2026-01-27T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.284583 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.284641 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.284678 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.284698 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.284712 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:29Z","lastTransitionTime":"2026-01-27T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.324389 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:29 crc kubenswrapper[4907]: E0127 18:07:29.324792 4907 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:07:29 crc kubenswrapper[4907]: E0127 18:07:29.324958 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs podName:eeaae2ee-c57b-4323-9d3c-563d87d85f08 nodeName:}" failed. No retries permitted until 2026-01-27 18:08:33.324901136 +0000 UTC m=+168.454183788 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs") pod "network-metrics-daemon-n2z5k" (UID: "eeaae2ee-c57b-4323-9d3c-563d87d85f08") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.388348 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.388511 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.388537 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.388626 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.388710 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:29Z","lastTransitionTime":"2026-01-27T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.491708 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.491793 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.491821 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.491855 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.491884 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:29Z","lastTransitionTime":"2026-01-27T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.595949 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.596041 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.596068 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.596102 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.596127 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:29Z","lastTransitionTime":"2026-01-27T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.699161 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.699221 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.699239 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.699262 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.699282 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:29Z","lastTransitionTime":"2026-01-27T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.747655 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.747820 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:29 crc kubenswrapper[4907]: E0127 18:07:29.747933 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.748005 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:29 crc kubenswrapper[4907]: E0127 18:07:29.748231 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:29 crc kubenswrapper[4907]: E0127 18:07:29.748433 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.781897 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:21:08.256907031 +0000 UTC Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.803169 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.803252 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.803277 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.803313 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.803335 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:29Z","lastTransitionTime":"2026-01-27T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.906110 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.906159 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.906176 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.906196 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:29 crc kubenswrapper[4907]: I0127 18:07:29.906210 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:29Z","lastTransitionTime":"2026-01-27T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.024980 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.025122 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.025151 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.025188 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.025227 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:30Z","lastTransitionTime":"2026-01-27T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.129334 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.129415 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.129437 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.129520 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.129542 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:30Z","lastTransitionTime":"2026-01-27T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.232758 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.232799 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.232810 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.232826 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.232838 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:30Z","lastTransitionTime":"2026-01-27T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.334631 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.334664 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.334677 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.334694 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.334706 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:30Z","lastTransitionTime":"2026-01-27T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.437607 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.437648 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.437661 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.437677 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.437688 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:30Z","lastTransitionTime":"2026-01-27T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.541097 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.541157 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.541171 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.541189 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.541202 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:30Z","lastTransitionTime":"2026-01-27T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.644364 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.644429 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.644446 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.644471 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.644490 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:30Z","lastTransitionTime":"2026-01-27T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.747080 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:30 crc kubenswrapper[4907]: E0127 18:07:30.747296 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.748459 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.748521 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.748538 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.748589 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.748607 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:30Z","lastTransitionTime":"2026-01-27T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.782884 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:28:42.231426672 +0000 UTC Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.851917 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.852057 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.852080 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.852102 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.852120 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:30Z","lastTransitionTime":"2026-01-27T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.955223 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.955738 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.955978 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.956113 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:30 crc kubenswrapper[4907]: I0127 18:07:30.956231 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:30Z","lastTransitionTime":"2026-01-27T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.060855 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.061430 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.061704 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.061885 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.062031 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:31Z","lastTransitionTime":"2026-01-27T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.165350 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.165377 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.165386 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.165399 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.165409 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:31Z","lastTransitionTime":"2026-01-27T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.269275 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.269336 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.269356 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.269381 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.269398 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:31Z","lastTransitionTime":"2026-01-27T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.371641 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.371694 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.371706 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.371724 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.371735 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:31Z","lastTransitionTime":"2026-01-27T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.473946 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.473992 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.474002 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.474022 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.474033 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:31Z","lastTransitionTime":"2026-01-27T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.577219 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.577273 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.577290 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.577314 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.577332 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:31Z","lastTransitionTime":"2026-01-27T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.680881 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.680940 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.680957 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.680981 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.680999 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:31Z","lastTransitionTime":"2026-01-27T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.747828 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.748040 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:31 crc kubenswrapper[4907]: E0127 18:07:31.748283 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.748349 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:31 crc kubenswrapper[4907]: E0127 18:07:31.748712 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:31 crc kubenswrapper[4907]: E0127 18:07:31.748813 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.783090 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 17:40:56.331059721 +0000 UTC Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.784738 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.784798 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.784817 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.784842 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.784863 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:31Z","lastTransitionTime":"2026-01-27T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.888388 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.888464 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.888484 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.888516 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.888534 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:31Z","lastTransitionTime":"2026-01-27T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.992813 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.993318 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.993548 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.993839 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:31 crc kubenswrapper[4907]: I0127 18:07:31.994015 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:31Z","lastTransitionTime":"2026-01-27T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.097081 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.097132 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.097147 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.097167 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.097180 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:32Z","lastTransitionTime":"2026-01-27T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.199793 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.200143 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.200211 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.200293 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.200354 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:32Z","lastTransitionTime":"2026-01-27T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.303256 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.303351 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.303364 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.303382 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.303395 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:32Z","lastTransitionTime":"2026-01-27T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.406873 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.406922 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.406934 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.406953 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.406967 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:32Z","lastTransitionTime":"2026-01-27T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.510039 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.510102 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.510119 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.510143 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.510160 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:32Z","lastTransitionTime":"2026-01-27T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.613046 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.613136 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.613159 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.613189 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.613214 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:32Z","lastTransitionTime":"2026-01-27T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.716743 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.716811 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.716834 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.716866 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.716891 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:32Z","lastTransitionTime":"2026-01-27T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.747480 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:32 crc kubenswrapper[4907]: E0127 18:07:32.747777 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.784373 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 09:21:24.321946899 +0000 UTC Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.820269 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.820340 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.820357 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.820383 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.820402 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:32Z","lastTransitionTime":"2026-01-27T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.923688 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.924787 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.924882 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.924930 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:32 crc kubenswrapper[4907]: I0127 18:07:32.924962 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:32Z","lastTransitionTime":"2026-01-27T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.028049 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.028114 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.028138 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.028162 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.028180 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:33Z","lastTransitionTime":"2026-01-27T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.131017 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.131091 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.131126 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.131160 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.131182 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:33Z","lastTransitionTime":"2026-01-27T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.234473 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.234542 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.234598 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.234631 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.234654 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:33Z","lastTransitionTime":"2026-01-27T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.336689 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.336775 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.336794 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.336819 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.336837 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:33Z","lastTransitionTime":"2026-01-27T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.439523 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.439604 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.439622 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.439646 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.439663 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:33Z","lastTransitionTime":"2026-01-27T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.543418 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.543480 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.543501 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.543530 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.543595 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:33Z","lastTransitionTime":"2026-01-27T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.647150 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.647212 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.647228 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.647251 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.647319 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:33Z","lastTransitionTime":"2026-01-27T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.747052 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.747231 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.747304 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:33 crc kubenswrapper[4907]: E0127 18:07:33.747238 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:33 crc kubenswrapper[4907]: E0127 18:07:33.747433 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:33 crc kubenswrapper[4907]: E0127 18:07:33.747609 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.749444 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.749520 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.749532 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.749547 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.749576 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:33Z","lastTransitionTime":"2026-01-27T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.784686 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 10:12:16.268155149 +0000 UTC Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.852378 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.852442 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.852454 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.852478 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.852494 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:33Z","lastTransitionTime":"2026-01-27T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.947237 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.947288 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.947301 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.947317 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:07:33 crc kubenswrapper[4907]: I0127 18:07:33.947328 4907 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:07:33Z","lastTransitionTime":"2026-01-27T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.001104 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps"] Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.001522 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.003607 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.004992 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.005105 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.005128 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.039602 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-n4rxh" podStartSLOduration=84.039545505 podStartE2EDuration="1m24.039545505s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:34.039321979 +0000 UTC m=+109.168604591" watchObservedRunningTime="2026-01-27 18:07:34.039545505 +0000 UTC m=+109.168828137" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.075246 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/25a08cad-2677-40b2-95d1-727093d151cc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.075610 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a08cad-2677-40b2-95d1-727093d151cc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.075825 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25a08cad-2677-40b2-95d1-727093d151cc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.076037 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/25a08cad-2677-40b2-95d1-727093d151cc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.076163 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25a08cad-2677-40b2-95d1-727093d151cc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.076589 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-jqfkt" podStartSLOduration=84.076575616 podStartE2EDuration="1m24.076575616s" podCreationTimestamp="2026-01-27 18:06:10 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:34.059206204 +0000 UTC m=+109.188488886" watchObservedRunningTime="2026-01-27 18:07:34.076575616 +0000 UTC m=+109.205858228" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.110547 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=87.110517448 podStartE2EDuration="1m27.110517448s" podCreationTimestamp="2026-01-27 18:06:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:34.109361485 +0000 UTC m=+109.238644117" watchObservedRunningTime="2026-01-27 18:07:34.110517448 +0000 UTC m=+109.239800070" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.177674 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/25a08cad-2677-40b2-95d1-727093d151cc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.177736 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25a08cad-2677-40b2-95d1-727093d151cc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.177795 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/25a08cad-2677-40b2-95d1-727093d151cc-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.177838 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a08cad-2677-40b2-95d1-727093d151cc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.177859 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25a08cad-2677-40b2-95d1-727093d151cc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.178850 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25a08cad-2677-40b2-95d1-727093d151cc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.178911 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/25a08cad-2677-40b2-95d1-727093d151cc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.179158 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/25a08cad-2677-40b2-95d1-727093d151cc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.181511 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz9tb" podStartSLOduration=83.181486321 podStartE2EDuration="1m23.181486321s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:34.180393539 +0000 UTC m=+109.309676151" watchObservedRunningTime="2026-01-27 18:07:34.181486321 +0000 UTC m=+109.310768933" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.194659 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a08cad-2677-40b2-95d1-727093d151cc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.200086 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25a08cad-2677-40b2-95d1-727093d151cc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5fhps\" (UID: \"25a08cad-2677-40b2-95d1-727093d151cc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.238552 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.238532631 podStartE2EDuration="1m28.238532631s" 
podCreationTimestamp="2026-01-27 18:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:34.22432741 +0000 UTC m=+109.353610072" watchObservedRunningTime="2026-01-27 18:07:34.238532631 +0000 UTC m=+109.367815253" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.267315 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=86.267292293 podStartE2EDuration="1m26.267292293s" podCreationTimestamp="2026-01-27 18:06:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:34.253503974 +0000 UTC m=+109.382786606" watchObservedRunningTime="2026-01-27 18:07:34.267292293 +0000 UTC m=+109.396574915" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.267921 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podStartSLOduration=84.267915931 podStartE2EDuration="1m24.267915931s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:34.266693165 +0000 UTC m=+109.395975797" watchObservedRunningTime="2026-01-27 18:07:34.267915931 +0000 UTC m=+109.397198553" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.282731 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=54.282704648 podStartE2EDuration="54.282704648s" podCreationTimestamp="2026-01-27 18:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:34.281817213 +0000 UTC 
m=+109.411099835" watchObservedRunningTime="2026-01-27 18:07:34.282704648 +0000 UTC m=+109.411987260" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.309818 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-fgtpz" podStartSLOduration=84.309793252 podStartE2EDuration="1m24.309793252s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:34.309472203 +0000 UTC m=+109.438754825" watchObservedRunningTime="2026-01-27 18:07:34.309793252 +0000 UTC m=+109.439075864" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.309982 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=30.309976197 podStartE2EDuration="30.309976197s" podCreationTimestamp="2026-01-27 18:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:34.293623134 +0000 UTC m=+109.422905766" watchObservedRunningTime="2026-01-27 18:07:34.309976197 +0000 UTC m=+109.439258809" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.319841 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.371518 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-9plnb" podStartSLOduration=84.371484366 podStartE2EDuration="1m24.371484366s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:34.370215929 +0000 UTC m=+109.499498541" watchObservedRunningTime="2026-01-27 18:07:34.371484366 +0000 UTC m=+109.500767018" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.747733 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:34 crc kubenswrapper[4907]: E0127 18:07:34.748710 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.784886 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:52:44.005794636 +0000 UTC Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.784951 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 18:07:34 crc kubenswrapper[4907]: I0127 18:07:34.795056 4907 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 18:07:35 crc kubenswrapper[4907]: I0127 18:07:35.347388 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" event={"ID":"25a08cad-2677-40b2-95d1-727093d151cc","Type":"ContainerStarted","Data":"d636975a3e7b7523a8a6bd3240cca4a1307763cb7e68116c4578ffc2d3a28180"} Jan 27 18:07:35 crc kubenswrapper[4907]: I0127 18:07:35.347483 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" event={"ID":"25a08cad-2677-40b2-95d1-727093d151cc","Type":"ContainerStarted","Data":"ef2aa78c4235d9dc2e37fafb5f064e6b970349ac342f9df1ccef8615d05840be"} Jan 27 18:07:35 crc kubenswrapper[4907]: I0127 18:07:35.370516 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5fhps" podStartSLOduration=85.370496641 podStartE2EDuration="1m25.370496641s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:35.369897094 +0000 UTC m=+110.499179736" watchObservedRunningTime="2026-01-27 18:07:35.370496641 +0000 UTC m=+110.499779253" Jan 27 18:07:35 crc 
kubenswrapper[4907]: I0127 18:07:35.748053 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:35 crc kubenswrapper[4907]: I0127 18:07:35.748198 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:35 crc kubenswrapper[4907]: I0127 18:07:35.748198 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:35 crc kubenswrapper[4907]: E0127 18:07:35.749373 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:35 crc kubenswrapper[4907]: E0127 18:07:35.749654 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:35 crc kubenswrapper[4907]: I0127 18:07:35.751373 4907 scope.go:117] "RemoveContainer" containerID="b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7" Jan 27 18:07:35 crc kubenswrapper[4907]: E0127 18:07:35.751672 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qj9w2_openshift-ovn-kubernetes(a62f5e7d-70be-4705-a4b0-d5e4f531cfde)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" Jan 27 18:07:35 crc kubenswrapper[4907]: E0127 18:07:35.750153 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:36 crc kubenswrapper[4907]: I0127 18:07:36.747803 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:36 crc kubenswrapper[4907]: E0127 18:07:36.748299 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:37 crc kubenswrapper[4907]: I0127 18:07:37.747467 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:37 crc kubenswrapper[4907]: I0127 18:07:37.747596 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:37 crc kubenswrapper[4907]: I0127 18:07:37.747604 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:37 crc kubenswrapper[4907]: E0127 18:07:37.747733 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:37 crc kubenswrapper[4907]: E0127 18:07:37.747878 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:37 crc kubenswrapper[4907]: E0127 18:07:37.747970 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:38 crc kubenswrapper[4907]: I0127 18:07:38.747602 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:38 crc kubenswrapper[4907]: E0127 18:07:38.747747 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:39 crc kubenswrapper[4907]: I0127 18:07:39.747412 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:39 crc kubenswrapper[4907]: I0127 18:07:39.747453 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:39 crc kubenswrapper[4907]: E0127 18:07:39.747592 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:39 crc kubenswrapper[4907]: I0127 18:07:39.747618 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:39 crc kubenswrapper[4907]: E0127 18:07:39.747779 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:39 crc kubenswrapper[4907]: E0127 18:07:39.747879 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:40 crc kubenswrapper[4907]: I0127 18:07:40.747937 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:40 crc kubenswrapper[4907]: E0127 18:07:40.748508 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:41 crc kubenswrapper[4907]: I0127 18:07:41.748116 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:41 crc kubenswrapper[4907]: I0127 18:07:41.748137 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:41 crc kubenswrapper[4907]: I0127 18:07:41.748249 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:41 crc kubenswrapper[4907]: E0127 18:07:41.748472 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:41 crc kubenswrapper[4907]: E0127 18:07:41.748679 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:41 crc kubenswrapper[4907]: E0127 18:07:41.749000 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:42 crc kubenswrapper[4907]: I0127 18:07:42.747801 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:42 crc kubenswrapper[4907]: E0127 18:07:42.747938 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:43 crc kubenswrapper[4907]: I0127 18:07:43.747528 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:43 crc kubenswrapper[4907]: I0127 18:07:43.747575 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:43 crc kubenswrapper[4907]: E0127 18:07:43.747748 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:43 crc kubenswrapper[4907]: I0127 18:07:43.747761 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:43 crc kubenswrapper[4907]: E0127 18:07:43.747854 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:43 crc kubenswrapper[4907]: E0127 18:07:43.747959 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:44 crc kubenswrapper[4907]: I0127 18:07:44.747717 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:44 crc kubenswrapper[4907]: E0127 18:07:44.747960 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:45 crc kubenswrapper[4907]: I0127 18:07:45.382440 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fgtpz_985b7738-a27c-4276-8160-c2baa64ab7f6/kube-multus/1.log" Jan 27 18:07:45 crc kubenswrapper[4907]: I0127 18:07:45.383289 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fgtpz_985b7738-a27c-4276-8160-c2baa64ab7f6/kube-multus/0.log" Jan 27 18:07:45 crc kubenswrapper[4907]: I0127 18:07:45.383376 4907 generic.go:334] "Generic (PLEG): container finished" podID="985b7738-a27c-4276-8160-c2baa64ab7f6" containerID="dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505" exitCode=1 Jan 27 18:07:45 crc kubenswrapper[4907]: I0127 18:07:45.383422 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fgtpz" event={"ID":"985b7738-a27c-4276-8160-c2baa64ab7f6","Type":"ContainerDied","Data":"dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505"} Jan 27 18:07:45 crc kubenswrapper[4907]: I0127 18:07:45.383479 4907 scope.go:117] "RemoveContainer" containerID="3c6c6b75906618b107ef1ed25c1cd08cfc7472058dff90a482069d747c0e7e0d" Jan 27 18:07:45 crc kubenswrapper[4907]: I0127 18:07:45.384097 4907 scope.go:117] "RemoveContainer" containerID="dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505" Jan 27 18:07:45 crc kubenswrapper[4907]: E0127 18:07:45.384382 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-fgtpz_openshift-multus(985b7738-a27c-4276-8160-c2baa64ab7f6)\"" pod="openshift-multus/multus-fgtpz" podUID="985b7738-a27c-4276-8160-c2baa64ab7f6" Jan 27 18:07:45 crc kubenswrapper[4907]: E0127 18:07:45.737315 4907 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 27 
18:07:45 crc kubenswrapper[4907]: I0127 18:07:45.747728 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:45 crc kubenswrapper[4907]: I0127 18:07:45.747764 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:45 crc kubenswrapper[4907]: E0127 18:07:45.749658 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:45 crc kubenswrapper[4907]: E0127 18:07:45.750038 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:45 crc kubenswrapper[4907]: I0127 18:07:45.749800 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:45 crc kubenswrapper[4907]: E0127 18:07:45.750309 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:45 crc kubenswrapper[4907]: E0127 18:07:45.882259 4907 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:07:46 crc kubenswrapper[4907]: I0127 18:07:46.390218 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fgtpz_985b7738-a27c-4276-8160-c2baa64ab7f6/kube-multus/1.log" Jan 27 18:07:46 crc kubenswrapper[4907]: I0127 18:07:46.747294 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:46 crc kubenswrapper[4907]: E0127 18:07:46.747445 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:47 crc kubenswrapper[4907]: I0127 18:07:47.747977 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:47 crc kubenswrapper[4907]: E0127 18:07:47.748167 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:47 crc kubenswrapper[4907]: I0127 18:07:47.748438 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:47 crc kubenswrapper[4907]: E0127 18:07:47.748523 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:47 crc kubenswrapper[4907]: I0127 18:07:47.748831 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:47 crc kubenswrapper[4907]: E0127 18:07:47.748980 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:48 crc kubenswrapper[4907]: I0127 18:07:48.748049 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:48 crc kubenswrapper[4907]: E0127 18:07:48.748276 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:48 crc kubenswrapper[4907]: I0127 18:07:48.749754 4907 scope.go:117] "RemoveContainer" containerID="b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7" Jan 27 18:07:49 crc kubenswrapper[4907]: I0127 18:07:49.401443 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/3.log" Jan 27 18:07:49 crc kubenswrapper[4907]: I0127 18:07:49.404662 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerStarted","Data":"5a8067782a2036bfd7d0190706c2df294256e816c477b42c1a74f9040dd85bf3"} Jan 27 18:07:49 crc kubenswrapper[4907]: I0127 18:07:49.405151 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:07:49 crc kubenswrapper[4907]: I0127 18:07:49.432023 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podStartSLOduration=99.431989622 podStartE2EDuration="1m39.431989622s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:07:49.430967242 +0000 UTC m=+124.560249874" watchObservedRunningTime="2026-01-27 
18:07:49.431989622 +0000 UTC m=+124.561272274" Jan 27 18:07:49 crc kubenswrapper[4907]: I0127 18:07:49.719374 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n2z5k"] Jan 27 18:07:49 crc kubenswrapper[4907]: I0127 18:07:49.719525 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:49 crc kubenswrapper[4907]: E0127 18:07:49.719687 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:49 crc kubenswrapper[4907]: I0127 18:07:49.747315 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:49 crc kubenswrapper[4907]: I0127 18:07:49.747380 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:49 crc kubenswrapper[4907]: I0127 18:07:49.747380 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:49 crc kubenswrapper[4907]: E0127 18:07:49.747494 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:49 crc kubenswrapper[4907]: E0127 18:07:49.747657 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:49 crc kubenswrapper[4907]: E0127 18:07:49.747780 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:50 crc kubenswrapper[4907]: E0127 18:07:50.883477 4907 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:07:51 crc kubenswrapper[4907]: I0127 18:07:51.747722 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:51 crc kubenswrapper[4907]: I0127 18:07:51.747721 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:51 crc kubenswrapper[4907]: I0127 18:07:51.747771 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:51 crc kubenswrapper[4907]: I0127 18:07:51.747950 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:51 crc kubenswrapper[4907]: E0127 18:07:51.747954 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:51 crc kubenswrapper[4907]: E0127 18:07:51.748078 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:51 crc kubenswrapper[4907]: E0127 18:07:51.748256 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:51 crc kubenswrapper[4907]: E0127 18:07:51.748412 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:53 crc kubenswrapper[4907]: I0127 18:07:53.747911 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:53 crc kubenswrapper[4907]: E0127 18:07:53.748059 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:53 crc kubenswrapper[4907]: I0127 18:07:53.748271 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:53 crc kubenswrapper[4907]: E0127 18:07:53.748323 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:53 crc kubenswrapper[4907]: I0127 18:07:53.748423 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:53 crc kubenswrapper[4907]: E0127 18:07:53.748462 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:53 crc kubenswrapper[4907]: I0127 18:07:53.748596 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:53 crc kubenswrapper[4907]: E0127 18:07:53.748638 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:55 crc kubenswrapper[4907]: I0127 18:07:55.747988 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:55 crc kubenswrapper[4907]: I0127 18:07:55.748033 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:55 crc kubenswrapper[4907]: I0127 18:07:55.748116 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:55 crc kubenswrapper[4907]: I0127 18:07:55.748157 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:55 crc kubenswrapper[4907]: E0127 18:07:55.751674 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:55 crc kubenswrapper[4907]: E0127 18:07:55.751771 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:55 crc kubenswrapper[4907]: E0127 18:07:55.751863 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:55 crc kubenswrapper[4907]: E0127 18:07:55.752293 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:55 crc kubenswrapper[4907]: E0127 18:07:55.884823 4907 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:07:56 crc kubenswrapper[4907]: I0127 18:07:56.748693 4907 scope.go:117] "RemoveContainer" containerID="dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505" Jan 27 18:07:57 crc kubenswrapper[4907]: I0127 18:07:57.438431 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fgtpz_985b7738-a27c-4276-8160-c2baa64ab7f6/kube-multus/1.log" Jan 27 18:07:57 crc kubenswrapper[4907]: I0127 18:07:57.438525 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fgtpz" event={"ID":"985b7738-a27c-4276-8160-c2baa64ab7f6","Type":"ContainerStarted","Data":"14b5e052edc9d584f105f6f14c22e4f3698d1e6bed62b8389665cf51f59b54b4"} Jan 27 18:07:57 crc kubenswrapper[4907]: I0127 18:07:57.747605 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:57 crc kubenswrapper[4907]: I0127 18:07:57.747698 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:57 crc kubenswrapper[4907]: E0127 18:07:57.747753 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:57 crc kubenswrapper[4907]: I0127 18:07:57.747606 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:57 crc kubenswrapper[4907]: E0127 18:07:57.747901 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:57 crc kubenswrapper[4907]: I0127 18:07:57.747957 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:57 crc kubenswrapper[4907]: E0127 18:07:57.748010 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:57 crc kubenswrapper[4907]: E0127 18:07:57.748058 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:07:59 crc kubenswrapper[4907]: I0127 18:07:59.747210 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:07:59 crc kubenswrapper[4907]: I0127 18:07:59.747260 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:07:59 crc kubenswrapper[4907]: I0127 18:07:59.747312 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:07:59 crc kubenswrapper[4907]: I0127 18:07:59.747211 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:07:59 crc kubenswrapper[4907]: E0127 18:07:59.747409 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:07:59 crc kubenswrapper[4907]: E0127 18:07:59.747621 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:07:59 crc kubenswrapper[4907]: E0127 18:07:59.747642 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:07:59 crc kubenswrapper[4907]: E0127 18:07:59.747702 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2z5k" podUID="eeaae2ee-c57b-4323-9d3c-563d87d85f08" Jan 27 18:08:01 crc kubenswrapper[4907]: I0127 18:08:01.747258 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:08:01 crc kubenswrapper[4907]: I0127 18:08:01.747315 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:08:01 crc kubenswrapper[4907]: I0127 18:08:01.747470 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:08:01 crc kubenswrapper[4907]: I0127 18:08:01.747849 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:08:01 crc kubenswrapper[4907]: I0127 18:08:01.750528 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 18:08:01 crc kubenswrapper[4907]: I0127 18:08:01.750708 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 18:08:01 crc kubenswrapper[4907]: I0127 18:08:01.751846 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 18:08:01 crc kubenswrapper[4907]: I0127 18:08:01.752158 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 18:08:01 crc kubenswrapper[4907]: I0127 18:08:01.752308 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 18:08:01 crc kubenswrapper[4907]: I0127 18:08:01.752587 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.445731 4907 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.496204 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8ljpb"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 
18:08:04.497881 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.503700 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.504071 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.505189 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.505828 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.506018 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.506040 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.506782 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.507156 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.507228 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.507384 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.512868 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9j78b"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.513732 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lg6ln"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.513942 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.514414 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-qb9qr"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.514590 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.515146 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.518444 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-5d442"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.519290 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-znwrp"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.519928 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.520104 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.521936 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.521962 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.544865 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.545022 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.545086 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.545480 4907 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.552790 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.553132 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.553683 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.555997 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.556206 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.556637 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.556739 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.557845 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.558198 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.558695 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.558780 4907 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.558832 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.558918 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.558989 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.559074 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.559131 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.559234 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.559339 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.559516 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.559729 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.560017 4907 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.560240 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.561036 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.561246 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.561313 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.561258 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.561490 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.561259 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.561862 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.561966 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.562062 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.562127 4907 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.562244 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.562745 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.562991 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-grwdr"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.563214 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bjfcf"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.563498 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.563675 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.563865 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.564103 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.564314 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.565266 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.567662 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.568117 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.568952 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.569229 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.569902 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.570475 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.570894 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.571649 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.571880 4907 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"serving-cert" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.572011 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.572170 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.572222 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.572356 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.572601 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.572951 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.573022 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.573958 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.576692 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.576906 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.578676 4907 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.579919 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-78q6j"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.580431 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-h79fx"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.580625 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.582095 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5z9d9"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.582412 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-h72cm"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.601392 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.604347 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.604832 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.605053 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.605359 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.622095 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.627067 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.627654 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-q8qbc"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.627976 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.628133 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.628274 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.628667 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.628988 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-h79fx" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.629658 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.630071 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.630314 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.630437 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.630732 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.631117 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.631295 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.631601 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.631910 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.632164 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.632391 4907 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.632617 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.632768 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.632999 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.633157 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.633291 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.633319 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.634709 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.635219 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.635482 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.635770 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: 
I0127 18:08:04.635899 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.638479 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.639620 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wwg9f"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.640179 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8ljpb"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.640274 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.640498 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.641093 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.642538 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.642929 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.643260 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.643336 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.643588 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.643583 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.643734 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.643748 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.643878 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.643986 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.644098 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.645912 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.645920 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.646236 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.646702 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.646846 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.648760 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.649263 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.649433 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.651664 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pn59x"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.652336 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.652539 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.653909 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6sp42\" (UID: \"f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.653945 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98f518f9-4f3f-45f1-80f4-b50d4eb03135-serving-cert\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.653963 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.653982 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9f254819-bf2c-4c38-881f-8d12a0d56278-encryption-config\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654000 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/db7629bc-e5a1-44e1-9af4-ecc83acfda75-encryption-config\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654015 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-config\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654030 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-machine-approver-tls\") pod \"machine-approver-56656f9798-5d442\" (UID: \"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654046 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-config\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654061 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-serving-cert\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654076 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-oauth-config\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654093 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-policies\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654108 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/db7629bc-e5a1-44e1-9af4-ecc83acfda75-node-pullsecrets\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654123 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-image-import-ca\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654141 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654157 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654172 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/db7629bc-e5a1-44e1-9af4-ecc83acfda75-etcd-client\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654188 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-stats-auth\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654204 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f254819-bf2c-4c38-881f-8d12a0d56278-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654224 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9f254819-bf2c-4c38-881f-8d12a0d56278-audit-dir\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654240 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-default-certificate\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654256 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654272 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/38363947-4768-44b8-b3fe-f7b5b482da55-available-featuregates\") pod \"openshift-config-operator-7777fb866f-78q6j\" (UID: \"38363947-4768-44b8-b3fe-f7b5b482da55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654289 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654306 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c678cbb-a03d-4ed8-85bd-befc2884454e-config\") pod \"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654321 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-config\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654335 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-dir\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654351 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28a88f65-9871-4372-b728-ed61f22642e4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bjql6\" (UID: \"28a88f65-9871-4372-b728-ed61f22642e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654367 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-service-ca-bundle\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654382 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrpwn\" (UniqueName: \"kubernetes.io/projected/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-kube-api-access-jrpwn\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654397 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-config\") pod \"machine-approver-56656f9798-5d442\" (UID: \"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654412 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/db7629bc-e5a1-44e1-9af4-ecc83acfda75-audit-dir\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654428 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzft7\" (UniqueName: \"kubernetes.io/projected/98f518f9-4f3f-45f1-80f4-b50d4eb03135-kube-api-access-jzft7\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654452 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-oauth-serving-cert\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654468 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7vhr\" (UniqueName: \"kubernetes.io/projected/c2d359e7-9de4-4357-ae4c-8da07c1a880c-kube-api-access-k7vhr\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654484 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7nqk\" (UniqueName: \"kubernetes.io/projected/db7629bc-e5a1-44e1-9af4-ecc83acfda75-kube-api-access-f7nqk\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654499 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db7629bc-e5a1-44e1-9af4-ecc83acfda75-serving-cert\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654521 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m6mm\" (UniqueName: \"kubernetes.io/projected/38363947-4768-44b8-b3fe-f7b5b482da55-kube-api-access-9m6mm\") pod \"openshift-config-operator-7777fb866f-78q6j\" (UID: \"38363947-4768-44b8-b3fe-f7b5b482da55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654537 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n697\" (UniqueName: \"kubernetes.io/projected/248aff8a-60f5-4154-a7bb-2dd95e4b2555-kube-api-access-4n697\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654576 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654593 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f22q8\" (UniqueName: \"kubernetes.io/projected/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-kube-api-access-f22q8\") pod \"machine-approver-56656f9798-5d442\" (UID: \"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654610 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2d359e7-9de4-4357-ae4c-8da07c1a880c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654625 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b66d56fc-163d-469a-8a47-a3e1462b1af8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654639 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98mbz\" (UniqueName: \"kubernetes.io/projected/f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f-kube-api-access-98mbz\") pod \"cluster-samples-operator-665b6dd947-6sp42\" (UID: \"f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654656 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qlqk\" (UniqueName: \"kubernetes.io/projected/1c678cbb-a03d-4ed8-85bd-befc2884454e-kube-api-access-8qlqk\") pod \"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654671 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-config\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654685 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c678cbb-a03d-4ed8-85bd-befc2884454e-trusted-ca\") pod \"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654700 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654715 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9ccc9d3-faa6-4c00-830b-2e1549a6725d-metrics-tls\") pod \"dns-operator-744455d44c-5z9d9\" (UID: \"d9ccc9d3-faa6-4c00-830b-2e1549a6725d\") " pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654731 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-config\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654745 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-images\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654758 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gwz8\" (UniqueName: \"kubernetes.io/projected/c8a31b60-14c7-4b73-a17f-60d101c0119b-kube-api-access-7gwz8\") pod \"downloads-7954f5f757-h79fx\" (UID: \"c8a31b60-14c7-4b73-a17f-60d101c0119b\") " pod="openshift-console/downloads-7954f5f757-h79fx"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654774 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/248aff8a-60f5-4154-a7bb-2dd95e4b2555-serving-cert\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654788 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-etcd-serving-ca\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654818 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b66d56fc-163d-469a-8a47-a3e1462b1af8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654834 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28a88f65-9871-4372-b728-ed61f22642e4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bjql6\" (UID: \"28a88f65-9871-4372-b728-ed61f22642e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654852 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654867 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-audit\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654883 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdmhs\" (UniqueName: \"kubernetes.io/projected/df82c5b4-85d6-4b74-85f5-46d598058d2d-kube-api-access-vdmhs\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654896 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9f254819-bf2c-4c38-881f-8d12a0d56278-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654911 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-trusted-ca-bundle\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654926 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-auth-proxy-config\") pod \"machine-approver-56656f9798-5d442\" (UID: \"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654940 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9f254819-bf2c-4c38-881f-8d12a0d56278-audit-policies\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654955 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dspns\" (UniqueName: \"kubernetes.io/projected/9f254819-bf2c-4c38-881f-8d12a0d56278-kube-api-access-dspns\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.654973 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pdvm\" (UniqueName: \"kubernetes.io/projected/b66d56fc-163d-469a-8a47-a3e1462b1af8-kube-api-access-2pdvm\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655021 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655035 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-service-ca\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655053 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655080 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-metrics-certs\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655098 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9f254819-bf2c-4c38-881f-8d12a0d56278-etcd-client\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655124 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d359e7-9de4-4357-ae4c-8da07c1a880c-config\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655153 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2d359e7-9de4-4357-ae4c-8da07c1a880c-service-ca-bundle\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655215 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655235 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655264 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-client-ca\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655293 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qwbk\" (UniqueName: \"kubernetes.io/projected/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-kube-api-access-5qwbk\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655320 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-client-ca\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655339 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655357 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f254819-bf2c-4c38-881f-8d12a0d56278-serving-cert\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655377 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm4wt\" (UniqueName: \"kubernetes.io/projected/d9ccc9d3-faa6-4c00-830b-2e1549a6725d-kube-api-access-cm4wt\") pod \"dns-operator-744455d44c-5z9d9\" (UID: \"d9ccc9d3-faa6-4c00-830b-2e1549a6725d\") " pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655418 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c678cbb-a03d-4ed8-85bd-befc2884454e-serving-cert\") pod \"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655440 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m25qd\" (UniqueName: \"kubernetes.io/projected/c40070fe-7a8d-4f73-ad68-7e0a36680906-kube-api-access-m25qd\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655474 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kw24\" (UniqueName: \"kubernetes.io/projected/28a88f65-9871-4372-b728-ed61f22642e4-kube-api-access-6kw24\") pod \"openshift-apiserver-operator-796bbdcf4f-bjql6\" (UID: \"28a88f65-9871-4372-b728-ed61f22642e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655516 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b66d56fc-163d-469a-8a47-a3e1462b1af8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655542 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38363947-4768-44b8-b3fe-f7b5b482da55-serving-cert\") pod \"openshift-config-operator-7777fb866f-78q6j\" (UID: \"38363947-4768-44b8-b3fe-f7b5b482da55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655598 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-session\") pod 
\"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.655624 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d359e7-9de4-4357-ae4c-8da07c1a880c-serving-cert\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.660637 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.662320 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.663568 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.664133 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.664255 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.666771 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.667451 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.667718 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xzht6"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.668220 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.668519 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-flpjm"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.669114 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.670289 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.670717 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.671199 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.671508 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.672131 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.672470 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.673696 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.674063 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.674084 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.674685 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.677012 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6qsv8"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.688238 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9j78b"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.688379 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.688836 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.691263 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.696986 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.697657 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.698733 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lg6ln"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.700233 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.700441 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.702269 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-qb9qr"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.703248 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.705248 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.706902 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-q8qbc"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.708383 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-znwrp"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.709886 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-grwdr"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.711304 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-4tcrf"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.713871 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.713966 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-4tcrf" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.714981 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-78q6j"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.721796 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.723985 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.725304 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bjfcf"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.726482 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.736448 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.736797 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.737787 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xzht6"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.738885 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.740193 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns-operator/dns-operator-744455d44c-5z9d9"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.740647 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.741428 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.742479 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pn59x"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.744108 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.746194 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-h79fx"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.747910 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wwg9f"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.749054 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.750222 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-flpjm"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.751454 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.752743 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 
18:08:04.754107 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.755220 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756438 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7vhr\" (UniqueName: \"kubernetes.io/projected/c2d359e7-9de4-4357-ae4c-8da07c1a880c-kube-api-access-k7vhr\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756476 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7nqk\" (UniqueName: \"kubernetes.io/projected/db7629bc-e5a1-44e1-9af4-ecc83acfda75-kube-api-access-f7nqk\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756504 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n697\" (UniqueName: \"kubernetes.io/projected/248aff8a-60f5-4154-a7bb-2dd95e4b2555-kube-api-access-4n697\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756530 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: 
\"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756624 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f22q8\" (UniqueName: \"kubernetes.io/projected/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-kube-api-access-f22q8\") pod \"machine-approver-56656f9798-5d442\" (UID: \"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756656 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db7629bc-e5a1-44e1-9af4-ecc83acfda75-serving-cert\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756685 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m6mm\" (UniqueName: \"kubernetes.io/projected/38363947-4768-44b8-b3fe-f7b5b482da55-kube-api-access-9m6mm\") pod \"openshift-config-operator-7777fb866f-78q6j\" (UID: \"38363947-4768-44b8-b3fe-f7b5b482da55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756711 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2d359e7-9de4-4357-ae4c-8da07c1a880c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756740 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/b66d56fc-163d-469a-8a47-a3e1462b1af8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756772 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98mbz\" (UniqueName: \"kubernetes.io/projected/f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f-kube-api-access-98mbz\") pod \"cluster-samples-operator-665b6dd947-6sp42\" (UID: \"f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756789 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-l59wn"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756800 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qlqk\" (UniqueName: \"kubernetes.io/projected/1c678cbb-a03d-4ed8-85bd-befc2884454e-kube-api-access-8qlqk\") pod \"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756825 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-config\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756851 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c678cbb-a03d-4ed8-85bd-befc2884454e-trusted-ca\") pod 
\"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756873 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756897 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9ccc9d3-faa6-4c00-830b-2e1549a6725d-metrics-tls\") pod \"dns-operator-744455d44c-5z9d9\" (UID: \"d9ccc9d3-faa6-4c00-830b-2e1549a6725d\") " pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756923 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-config\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756945 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-images\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756966 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gwz8\" (UniqueName: 
\"kubernetes.io/projected/c8a31b60-14c7-4b73-a17f-60d101c0119b-kube-api-access-7gwz8\") pod \"downloads-7954f5f757-h79fx\" (UID: \"c8a31b60-14c7-4b73-a17f-60d101c0119b\") " pod="openshift-console/downloads-7954f5f757-h79fx" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.756989 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/248aff8a-60f5-4154-a7bb-2dd95e4b2555-serving-cert\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757011 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-etcd-serving-ca\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757037 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28a88f65-9871-4372-b728-ed61f22642e4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bjql6\" (UID: \"28a88f65-9871-4372-b728-ed61f22642e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757061 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b66d56fc-163d-469a-8a47-a3e1462b1af8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757087 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757109 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-audit\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757134 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdmhs\" (UniqueName: \"kubernetes.io/projected/df82c5b4-85d6-4b74-85f5-46d598058d2d-kube-api-access-vdmhs\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757157 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9f254819-bf2c-4c38-881f-8d12a0d56278-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757180 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-trusted-ca-bundle\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 
18:08:04.757204 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-auth-proxy-config\") pod \"machine-approver-56656f9798-5d442\" (UID: \"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757227 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9f254819-bf2c-4c38-881f-8d12a0d56278-audit-policies\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757250 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dspns\" (UniqueName: \"kubernetes.io/projected/9f254819-bf2c-4c38-881f-8d12a0d56278-kube-api-access-dspns\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757276 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pdvm\" (UniqueName: \"kubernetes.io/projected/b66d56fc-163d-469a-8a47-a3e1462b1af8-kube-api-access-2pdvm\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757304 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757328 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-service-ca\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757351 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757375 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-metrics-certs\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757403 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757429 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757453 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9f254819-bf2c-4c38-881f-8d12a0d56278-etcd-client\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757480 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d359e7-9de4-4357-ae4c-8da07c1a880c-config\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757504 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2d359e7-9de4-4357-ae4c-8da07c1a880c-service-ca-bundle\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757528 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-client-ca\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757602 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qwbk\" (UniqueName: \"kubernetes.io/projected/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-kube-api-access-5qwbk\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757629 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-client-ca\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757654 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757687 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c678cbb-a03d-4ed8-85bd-befc2884454e-serving-cert\") pod \"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757710 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f254819-bf2c-4c38-881f-8d12a0d56278-serving-cert\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757738 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm4wt\" (UniqueName: \"kubernetes.io/projected/d9ccc9d3-faa6-4c00-830b-2e1549a6725d-kube-api-access-cm4wt\") pod \"dns-operator-744455d44c-5z9d9\" (UID: \"d9ccc9d3-faa6-4c00-830b-2e1549a6725d\") " pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757756 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-l59wn"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757765 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kw24\" (UniqueName: \"kubernetes.io/projected/28a88f65-9871-4372-b728-ed61f22642e4-kube-api-access-6kw24\") pod \"openshift-apiserver-operator-796bbdcf4f-bjql6\" (UID: \"28a88f65-9871-4372-b728-ed61f22642e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757792 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m25qd\" (UniqueName: \"kubernetes.io/projected/c40070fe-7a8d-4f73-ad68-7e0a36680906-kube-api-access-m25qd\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757824 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b66d56fc-163d-469a-8a47-a3e1462b1af8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757852 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757874 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d359e7-9de4-4357-ae4c-8da07c1a880c-serving-cert\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757899 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38363947-4768-44b8-b3fe-f7b5b482da55-serving-cert\") pod \"openshift-config-operator-7777fb866f-78q6j\" (UID: \"38363947-4768-44b8-b3fe-f7b5b482da55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757934 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6sp42\" (UID: \"f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757961 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757985 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9f254819-bf2c-4c38-881f-8d12a0d56278-encryption-config\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.757995 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758009 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/db7629bc-e5a1-44e1-9af4-ecc83acfda75-encryption-config\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758030 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98f518f9-4f3f-45f1-80f4-b50d4eb03135-serving-cert\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758053 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-config\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758077 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-machine-approver-tls\") pod \"machine-approver-56656f9798-5d442\" (UID: \"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758103 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-config\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758125 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-policies\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758148 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/db7629bc-e5a1-44e1-9af4-ecc83acfda75-node-pullsecrets\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758171 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-image-import-ca\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758195 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-serving-cert\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758217 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-oauth-config\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758241 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758265 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758288 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/db7629bc-e5a1-44e1-9af4-ecc83acfda75-etcd-client\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758312 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-stats-auth\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758340 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758364 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f254819-bf2c-4c38-881f-8d12a0d56278-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758390 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9f254819-bf2c-4c38-881f-8d12a0d56278-audit-dir\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758414 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-default-certificate\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758436 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/38363947-4768-44b8-b3fe-f7b5b482da55-available-featuregates\") pod \"openshift-config-operator-7777fb866f-78q6j\" (UID: \"38363947-4768-44b8-b3fe-f7b5b482da55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758461 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758486 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c678cbb-a03d-4ed8-85bd-befc2884454e-config\") pod \"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758508 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-dir\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758535 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28a88f65-9871-4372-b728-ed61f22642e4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bjql6\" (UID: \"28a88f65-9871-4372-b728-ed61f22642e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758578 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-service-ca-bundle\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758605 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrpwn\" (UniqueName: \"kubernetes.io/projected/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-kube-api-access-jrpwn\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758631 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-config\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758669 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-config\") pod \"machine-approver-56656f9798-5d442\" (UID: \"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758692 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/db7629bc-e5a1-44e1-9af4-ecc83acfda75-audit-dir\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758717 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzft7\" (UniqueName: \"kubernetes.io/projected/98f518f9-4f3f-45f1-80f4-b50d4eb03135-kube-api-access-jzft7\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.758744 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-oauth-serving-cert\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.759542 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.759715 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-oauth-serving-cert\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.759905 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28a88f65-9871-4372-b728-ed61f22642e4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bjql6\" (UID: \"28a88f65-9871-4372-b728-ed61f22642e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.760013 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-images\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.760193 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c678cbb-a03d-4ed8-85bd-befc2884454e-trusted-ca\") pod \"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.761403 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-etcd-serving-ca\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.761720 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/db7629bc-e5a1-44e1-9af4-ecc83acfda75-node-pullsecrets\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.762458 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-image-import-ca\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.762518 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d359e7-9de4-4357-ae4c-8da07c1a880c-config\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.762518 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6qsv8"]
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.763069 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9f254819-bf2c-4c38-881f-8d12a0d56278-audit-policies\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.763071 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/38363947-4768-44b8-b3fe-f7b5b482da55-available-featuregates\") pod \"openshift-config-operator-7777fb866f-78q6j\" (UID: \"38363947-4768-44b8-b3fe-f7b5b482da55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.763356 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.763601 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-config\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.763988 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-service-ca\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.764324 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2d359e7-9de4-4357-ae4c-8da07c1a880c-service-ca-bundle\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.764424 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.764626 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-dir\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.764855 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.764943 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2d359e7-9de4-4357-ae4c-8da07c1a880c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.765039 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.765248 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/db7629bc-e5a1-44e1-9af4-ecc83acfda75-audit\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.765493 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-client-ca\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.765607 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db7629bc-e5a1-44e1-9af4-ecc83acfda75-serving-cert\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.763361 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-policies\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.765948 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-auth-proxy-config\") pod \"machine-approver-56656f9798-5d442\" (UID: \"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.766084 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.766137 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-trusted-ca-bundle\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.766276 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9f254819-bf2c-4c38-881f-8d12a0d56278-etcd-client\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.766436 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-config\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.766468 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.767171 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/248aff8a-60f5-4154-a7bb-2dd95e4b2555-serving-cert\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.768107 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f254819-bf2c-4c38-881f-8d12a0d56278-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.768146 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-config\") pod \"machine-approver-56656f9798-5d442\" (UID: \"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.768175 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/db7629bc-e5a1-44e1-9af4-ecc83acfda75-audit-dir\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.768237 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.768245 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9f254819-bf2c-4c38-881f-8d12a0d56278-audit-dir\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.767193 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b66d56fc-163d-469a-8a47-a3e1462b1af8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.768490 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.768491 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9ccc9d3-faa6-4c00-830b-2e1549a6725d-metrics-tls\") pod \"dns-operator-744455d44c-5z9d9\" (UID: \"d9ccc9d3-faa6-4c00-830b-2e1549a6725d\") " pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.768686 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.768977 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b"
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.769021 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-service-ca-bundle\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.769781 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-serving-cert\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.770148 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b66d56fc-163d-469a-8a47-a3e1462b1af8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.770301 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c678cbb-a03d-4ed8-85bd-befc2884454e-config\") pod \"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.770308 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9f254819-bf2c-4c38-881f-8d12a0d56278-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.770445 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-config\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.770485 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-client-ca\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.770715 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/db7629bc-e5a1-44e1-9af4-ecc83acfda75-etcd-client\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.770963 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-config\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.771630 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.771676 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-config\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.772032 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d359e7-9de4-4357-ae4c-8da07c1a880c-serving-cert\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.772980 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98f518f9-4f3f-45f1-80f4-b50d4eb03135-serving-cert\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.773295 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.773734 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-oauth-config\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.773827 4907 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-qtfgw"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.774599 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qtfgw" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.775002 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-l59wn"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.776174 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-4tcrf"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.777322 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qtfgw"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.778583 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-bvqd5"] Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.784385 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/db7629bc-e5a1-44e1-9af4-ecc83acfda75-encryption-config\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.784402 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.784757 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/28a88f65-9871-4372-b728-ed61f22642e4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bjql6\" (UID: \"28a88f65-9871-4372-b728-ed61f22642e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.785222 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c678cbb-a03d-4ed8-85bd-befc2884454e-serving-cert\") pod \"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.785401 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.785546 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38363947-4768-44b8-b3fe-f7b5b482da55-serving-cert\") pod \"openshift-config-operator-7777fb866f-78q6j\" (UID: \"38363947-4768-44b8-b3fe-f7b5b482da55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.785882 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 
18:08:04.786251 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6sp42\" (UID: \"f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.786303 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9f254819-bf2c-4c38-881f-8d12a0d56278-encryption-config\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.786611 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-machine-approver-tls\") pod \"machine-approver-56656f9798-5d442\" (UID: \"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.786735 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f254819-bf2c-4c38-881f-8d12a0d56278-serving-cert\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.788150 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.788299 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.794002 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bvqd5" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.800925 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.820051 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.841077 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.860689 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.881450 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.902267 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.907832 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-metrics-certs\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm" 
Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.923091 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.941388 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.962170 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 18:08:04 crc kubenswrapper[4907]: I0127 18:08:04.981285 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.020839 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.025693 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-default-certificate\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.040430 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.061776 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.072694 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-stats-auth\") pod \"router-default-5444994796-h72cm\" (UID: 
\"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.082323 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.101597 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.121478 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.141244 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.160957 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.182595 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.200842 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.220978 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.243881 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.261988 4907 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.283401 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.302390 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.322109 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.341680 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.360877 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.393884 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.402498 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.421965 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.441002 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.461421 4907 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.492469 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.502340 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.521326 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.541215 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.561181 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.581693 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.600963 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.621190 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.641453 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.662126 4907 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.679979 4907 request.go:700] Waited for 1.015244823s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.681923 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.702123 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.720708 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.741240 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.762239 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.780938 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.803250 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.821658 4907 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"serving-cert" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.841634 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.861784 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.881066 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.901407 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.920228 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.941951 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.960956 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 18:08:05 crc kubenswrapper[4907]: I0127 18:08:05.982183 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.001456 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.021205 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.042155 4907 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.062478 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.082247 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.103084 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.122720 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.142149 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.162013 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.180983 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.202357 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.221054 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.241975 4907 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.261391 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.282207 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.301069 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.320995 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.389813 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7vhr\" (UniqueName: \"kubernetes.io/projected/c2d359e7-9de4-4357-ae4c-8da07c1a880c-kube-api-access-k7vhr\") pod \"authentication-operator-69f744f599-qb9qr\" (UID: \"c2d359e7-9de4-4357-ae4c-8da07c1a880c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.407538 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7nqk\" (UniqueName: \"kubernetes.io/projected/db7629bc-e5a1-44e1-9af4-ecc83acfda75-kube-api-access-f7nqk\") pod \"apiserver-76f77b778f-8ljpb\" (UID: \"db7629bc-e5a1-44e1-9af4-ecc83acfda75\") " pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.424174 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f22q8\" (UniqueName: \"kubernetes.io/projected/bb98d017-ae04-4e9d-9b9f-dde9530b7acf-kube-api-access-f22q8\") pod \"machine-approver-56656f9798-5d442\" (UID: 
\"bb98d017-ae04-4e9d-9b9f-dde9530b7acf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.439094 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n697\" (UniqueName: \"kubernetes.io/projected/248aff8a-60f5-4154-a7bb-2dd95e4b2555-kube-api-access-4n697\") pod \"controller-manager-879f6c89f-9j78b\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.441250 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.461894 4907 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.463819 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.480636 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.485069 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" Jan 27 18:08:06 crc kubenswrapper[4907]: W0127 18:08:06.500751 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb98d017_ae04_4e9d_9b9f_dde9530b7acf.slice/crio-79e9d9e5874ad3fcba55ec581ddb50a5771b97d8ad8307740e22252df81492e8 WatchSource:0}: Error finding container 79e9d9e5874ad3fcba55ec581ddb50a5771b97d8ad8307740e22252df81492e8: Status 404 returned error can't find the container with id 79e9d9e5874ad3fcba55ec581ddb50a5771b97d8ad8307740e22252df81492e8 Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.518642 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gwz8\" (UniqueName: \"kubernetes.io/projected/c8a31b60-14c7-4b73-a17f-60d101c0119b-kube-api-access-7gwz8\") pod \"downloads-7954f5f757-h79fx\" (UID: \"c8a31b60-14c7-4b73-a17f-60d101c0119b\") " pod="openshift-console/downloads-7954f5f757-h79fx" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.538678 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qlqk\" (UniqueName: \"kubernetes.io/projected/1c678cbb-a03d-4ed8-85bd-befc2884454e-kube-api-access-8qlqk\") pod \"console-operator-58897d9998-bjfcf\" (UID: \"1c678cbb-a03d-4ed8-85bd-befc2884454e\") " pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.554875 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98mbz\" (UniqueName: \"kubernetes.io/projected/f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f-kube-api-access-98mbz\") pod \"cluster-samples-operator-665b6dd947-6sp42\" (UID: \"f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.585007 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b66d56fc-163d-469a-8a47-a3e1462b1af8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.612153 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dspns\" (UniqueName: \"kubernetes.io/projected/9f254819-bf2c-4c38-881f-8d12a0d56278-kube-api-access-dspns\") pod \"apiserver-7bbb656c7d-xld9m\" (UID: \"9f254819-bf2c-4c38-881f-8d12a0d56278\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.617107 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m6mm\" (UniqueName: \"kubernetes.io/projected/38363947-4768-44b8-b3fe-f7b5b482da55-kube-api-access-9m6mm\") pod \"openshift-config-operator-7777fb866f-78q6j\" (UID: \"38363947-4768-44b8-b3fe-f7b5b482da55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.632445 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-h79fx" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.635648 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pdvm\" (UniqueName: \"kubernetes.io/projected/b66d56fc-163d-469a-8a47-a3e1462b1af8-kube-api-access-2pdvm\") pod \"cluster-image-registry-operator-dc59b4c8b-747jk\" (UID: \"b66d56fc-163d-469a-8a47-a3e1462b1af8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.659345 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdmhs\" (UniqueName: \"kubernetes.io/projected/df82c5b4-85d6-4b74-85f5-46d598058d2d-kube-api-access-vdmhs\") pod \"oauth-openshift-558db77b4-lg6ln\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.668366 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.669213 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.676591 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qwbk\" (UniqueName: \"kubernetes.io/projected/b3e7e0e7-2f37-4998-af7c-6e5d373a1264-kube-api-access-5qwbk\") pod \"machine-api-operator-5694c8668f-znwrp\" (UID: \"b3e7e0e7-2f37-4998-af7c-6e5d373a1264\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.699051 4907 request.go:700] Waited for 1.930337871s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/serviceaccounts/router/token Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.699608 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzft7\" (UniqueName: \"kubernetes.io/projected/98f518f9-4f3f-45f1-80f4-b50d4eb03135-kube-api-access-jzft7\") pod \"route-controller-manager-6576b87f9c-7mcmq\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.705301 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.713771 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-qb9qr"] Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.726442 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrpwn\" (UniqueName: \"kubernetes.io/projected/d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e-kube-api-access-jrpwn\") pod \"router-default-5444994796-h72cm\" (UID: \"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e\") " pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:06 crc kubenswrapper[4907]: W0127 18:08:06.740950 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2d359e7_9de4_4357_ae4c_8da07c1a880c.slice/crio-6f9c8d88539a6808f0797fdfd9e7f88c6f05f953590a40792865ee706324087c WatchSource:0}: Error finding container 6f9c8d88539a6808f0797fdfd9e7f88c6f05f953590a40792865ee706324087c: Status 404 returned error can't find the container with id 6f9c8d88539a6808f0797fdfd9e7f88c6f05f953590a40792865ee706324087c Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.741861 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kw24\" (UniqueName: \"kubernetes.io/projected/28a88f65-9871-4372-b728-ed61f22642e4-kube-api-access-6kw24\") pod \"openshift-apiserver-operator-796bbdcf4f-bjql6\" (UID: \"28a88f65-9871-4372-b728-ed61f22642e4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.746735 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.757774 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m25qd\" (UniqueName: \"kubernetes.io/projected/c40070fe-7a8d-4f73-ad68-7e0a36680906-kube-api-access-m25qd\") pod \"console-f9d7485db-grwdr\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") " pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.774799 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.784005 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.784232 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm4wt\" (UniqueName: \"kubernetes.io/projected/d9ccc9d3-faa6-4c00-830b-2e1549a6725d-kube-api-access-cm4wt\") pod \"dns-operator-744455d44c-5z9d9\" (UID: \"d9ccc9d3-faa6-4c00-830b-2e1549a6725d\") " pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.798437 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.800827 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.807834 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.815097 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.820869 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.826835 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.831014 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.841964 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.848939 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.851709 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.852707 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-h79fx"] Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.862237 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.867803 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.880942 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.903183 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.919600 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m"] Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.977192 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.989597 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d3d480c-01ea-4ec4-b238-16e70bb9caff-serving-cert\") pod \"service-ca-operator-777779d784-xzht6\" (UID: \"3d3d480c-01ea-4ec4-b238-16e70bb9caff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.989654 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/46368914-416a-4849-9652-9c3ddae03429-bound-sa-token\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.989694 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/6ce158a5-7aba-4844-97ef-733b55d1694e-metrics-tls\") pod \"dns-default-4tcrf\" (UID: \"6ce158a5-7aba-4844-97ef-733b55d1694e\") " pod="openshift-dns/dns-default-4tcrf" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.989718 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d3d480c-01ea-4ec4-b238-16e70bb9caff-config\") pod \"service-ca-operator-777779d784-xzht6\" (UID: \"3d3d480c-01ea-4ec4-b238-16e70bb9caff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.989772 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/981b1dce-6375-4c49-9b16-144c98fc886c-etcd-service-ca\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.989790 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/981b1dce-6375-4c49-9b16-144c98fc886c-config\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.989845 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbf6d5b6-a4d1-4c8b-a111-9802cec24aab-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bz48v\" (UID: \"fbf6d5b6-a4d1-4c8b-a111-9802cec24aab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.989863 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d667690f-b387-424c-b130-e50277eaa0c4-srv-cert\") pod \"olm-operator-6b444d44fb-lfqhn\" (UID: \"d667690f-b387-424c-b130-e50277eaa0c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.989917 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/831e1c4c-ecd4-4617-ab4a-37acc328a062-auth-proxy-config\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.989935 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/178c40d2-9468-43b5-b33b-f95b60268091-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gtn\" (UID: \"178c40d2-9468-43b5-b33b-f95b60268091\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.989954 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/decaba3c-d32c-4a1d-b413-52c195883560-signing-key\") pod \"service-ca-9c57cc56f-flpjm\" (UID: \"decaba3c-d32c-4a1d-b413-52c195883560\") " pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990017 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbf6d5b6-a4d1-4c8b-a111-9802cec24aab-config\") pod 
\"kube-apiserver-operator-766d6c64bb-bz48v\" (UID: \"fbf6d5b6-a4d1-4c8b-a111-9802cec24aab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990038 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4479b1ff-dfc5-4b7e-9b25-8472bcd58f56-config\") pod \"kube-controller-manager-operator-78b949d7b-j9xmt\" (UID: \"4479b1ff-dfc5-4b7e-9b25-8472bcd58f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990084 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/981b1dce-6375-4c49-9b16-144c98fc886c-serving-cert\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990130 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbf6d5b6-a4d1-4c8b-a111-9802cec24aab-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bz48v\" (UID: \"fbf6d5b6-a4d1-4c8b-a111-9802cec24aab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990168 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-certificates\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 
18:08:06.990187 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhqkf\" (UniqueName: \"kubernetes.io/projected/6ce158a5-7aba-4844-97ef-733b55d1694e-kube-api-access-dhqkf\") pod \"dns-default-4tcrf\" (UID: \"6ce158a5-7aba-4844-97ef-733b55d1694e\") " pod="openshift-dns/dns-default-4tcrf" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990210 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-tmpfs\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: \"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990230 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pn59x\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990032 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8ljpb"] Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990565 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5wcs\" (UniqueName: \"kubernetes.io/projected/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-kube-api-access-f5wcs\") pod \"marketplace-operator-79b997595-pn59x\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990631 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0ed825a-5a7b-454e-80f7-5cfa3d459032-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwwc7\" (UID: \"d0ed825a-5a7b-454e-80f7-5cfa3d459032\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990657 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea5acd47-9e68-4600-beff-4ad9454dde7a-proxy-tls\") pod \"machine-config-controller-84d6567774-x42x8\" (UID: \"ea5acd47-9e68-4600-beff-4ad9454dde7a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990687 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf23b11-96d6-4c77-8145-b7928844bd5e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kf4x\" (UID: \"daf23b11-96d6-4c77-8145-b7928844bd5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990708 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf23b11-96d6-4c77-8145-b7928844bd5e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kf4x\" (UID: \"daf23b11-96d6-4c77-8145-b7928844bd5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990757 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw2pn\" (UniqueName: 
\"kubernetes.io/projected/8ca4e1e6-2eaa-436c-a083-0d33fe87c756-kube-api-access-pw2pn\") pod \"multus-admission-controller-857f4d67dd-6qsv8\" (UID: \"8ca4e1e6-2eaa-436c-a083-0d33fe87c756\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990782 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/178c40d2-9468-43b5-b33b-f95b60268091-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gtn\" (UID: \"178c40d2-9468-43b5-b33b-f95b60268091\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990806 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c85caecd-2eec-479e-82a3-2ac3c53c79c6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.990829 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p58ls\" (UniqueName: \"kubernetes.io/projected/46368914-416a-4849-9652-9c3ddae03429-kube-api-access-p58ls\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.992176 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-secret-volume\") pod \"collect-profiles-29492280-hkhf5\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.992200 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/434d6d34-127a-4de6-8f5c-6ea67008f70a-profile-collector-cert\") pod \"catalog-operator-68c6474976-85nxl\" (UID: \"434d6d34-127a-4de6-8f5c-6ea67008f70a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.992262 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62sfv\" (UniqueName: \"kubernetes.io/projected/981b1dce-6375-4c49-9b16-144c98fc886c-kube-api-access-62sfv\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.992585 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/46368914-416a-4849-9652-9c3ddae03429-metrics-tls\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.992637 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/981b1dce-6375-4c49-9b16-144c98fc886c-etcd-client\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.992688 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42d77196-c327-47c3-8713-d23038a08e13-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7v8cj\" (UID: \"42d77196-c327-47c3-8713-d23038a08e13\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.992782 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/831e1c4c-ecd4-4617-ab4a-37acc328a062-images\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.993003 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4479b1ff-dfc5-4b7e-9b25-8472bcd58f56-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-j9xmt\" (UID: \"4479b1ff-dfc5-4b7e-9b25-8472bcd58f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.993139 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-webhook-cert\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: \"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.993186 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea5acd47-9e68-4600-beff-4ad9454dde7a-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-x42x8\" (UID: \"ea5acd47-9e68-4600-beff-4ad9454dde7a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8"
Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.993209 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-config-volume\") pod \"collect-profiles-29492280-hkhf5\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5"
Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.993317 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lxsd\" (UniqueName: \"kubernetes.io/projected/486be3bf-a27f-4a44-97f3-751b782bee1f-kube-api-access-6lxsd\") pod \"package-server-manager-789f6589d5-tb79g\" (UID: \"486be3bf-a27f-4a44-97f3-751b782bee1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g"
Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.993711 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/434d6d34-127a-4de6-8f5c-6ea67008f70a-srv-cert\") pod \"catalog-operator-68c6474976-85nxl\" (UID: \"434d6d34-127a-4de6-8f5c-6ea67008f70a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl"
Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.993773 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mktxp\" (UniqueName: \"kubernetes.io/projected/434d6d34-127a-4de6-8f5c-6ea67008f70a-kube-api-access-mktxp\") pod \"catalog-operator-68c6474976-85nxl\" (UID: \"434d6d34-127a-4de6-8f5c-6ea67008f70a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl"
Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.996705 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6wm4\" (UniqueName: \"kubernetes.io/projected/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-kube-api-access-l6wm4\") pod \"collect-profiles-29492280-hkhf5\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5"
Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.996774 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-trusted-ca\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.996810 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/831e1c4c-ecd4-4617-ab4a-37acc328a062-proxy-tls\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw"
Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.996835 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cvzq\" (UniqueName: \"kubernetes.io/projected/831e1c4c-ecd4-4617-ab4a-37acc328a062-kube-api-access-9cvzq\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw"
Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.996970 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/decaba3c-d32c-4a1d-b413-52c195883560-signing-cabundle\") pod \"service-ca-9c57cc56f-flpjm\" (UID: \"decaba3c-d32c-4a1d-b413-52c195883560\") " pod="openshift-service-ca/service-ca-9c57cc56f-flpjm"
Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.997427 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb7gj\" (UniqueName: \"kubernetes.io/projected/42d77196-c327-47c3-8713-d23038a08e13-kube-api-access-tb7gj\") pod \"control-plane-machine-set-operator-78cbb6b69f-7v8cj\" (UID: \"42d77196-c327-47c3-8713-d23038a08e13\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj"
Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.997541 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jz9k\" (UniqueName: \"kubernetes.io/projected/decaba3c-d32c-4a1d-b413-52c195883560-kube-api-access-9jz9k\") pod \"service-ca-9c57cc56f-flpjm\" (UID: \"decaba3c-d32c-4a1d-b413-52c195883560\") " pod="openshift-service-ca/service-ca-9c57cc56f-flpjm"
Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.997619 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d667690f-b387-424c-b130-e50277eaa0c4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lfqhn\" (UID: \"d667690f-b387-424c-b130-e50277eaa0c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn"
Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.997713 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ce158a5-7aba-4844-97ef-733b55d1694e-config-volume\") pod \"dns-default-4tcrf\" (UID: \"6ce158a5-7aba-4844-97ef-733b55d1694e\") " pod="openshift-dns/dns-default-4tcrf"
Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.997826 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/daf23b11-96d6-4c77-8145-b7928844bd5e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kf4x\" (UID: \"daf23b11-96d6-4c77-8145-b7928844bd5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x"
Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.997855 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pn59x\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn59x"
Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.998230 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn29f\" (UniqueName: \"kubernetes.io/projected/ea5acd47-9e68-4600-beff-4ad9454dde7a-kube-api-access-xn29f\") pod \"machine-config-controller-84d6567774-x42x8\" (UID: \"ea5acd47-9e68-4600-beff-4ad9454dde7a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8"
Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.998607 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m7r5\" (UniqueName: \"kubernetes.io/projected/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-kube-api-access-2m7r5\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: \"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf"
Jan 27 18:08:06 crc kubenswrapper[4907]: I0127 18:08:06.998672 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0ed825a-5a7b-454e-80f7-5cfa3d459032-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwwc7\" (UID: \"d0ed825a-5a7b-454e-80f7-5cfa3d459032\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.998912 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn456\" (UniqueName: \"kubernetes.io/projected/6781da2d-2096-43fc-857d-d46734c50e16-kube-api-access-cn456\") pod \"migrator-59844c95c7-rv75f\" (UID: \"6781da2d-2096-43fc-857d-d46734c50e16\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.998980 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-bound-sa-token\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999021 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc7ln\" (UniqueName: \"kubernetes.io/projected/178c40d2-9468-43b5-b33b-f95b60268091-kube-api-access-xc7ln\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gtn\" (UID: \"178c40d2-9468-43b5-b33b-f95b60268091\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999085 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gmqq\" (UniqueName: \"kubernetes.io/projected/3d3d480c-01ea-4ec4-b238-16e70bb9caff-kube-api-access-6gmqq\") pod \"service-ca-operator-777779d784-xzht6\" (UID: \"3d3d480c-01ea-4ec4-b238-16e70bb9caff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999113 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4479b1ff-dfc5-4b7e-9b25-8472bcd58f56-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-j9xmt\" (UID: \"4479b1ff-dfc5-4b7e-9b25-8472bcd58f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999149 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-apiservice-cert\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: \"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999248 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/486be3bf-a27f-4a44-97f3-751b782bee1f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tb79g\" (UID: \"486be3bf-a27f-4a44-97f3-751b782bee1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999282 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9t88\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-kube-api-access-b9t88\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999312 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ca4e1e6-2eaa-436c-a083-0d33fe87c756-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6qsv8\" (UID: \"8ca4e1e6-2eaa-436c-a083-0d33fe87c756\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999351 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx9zs\" (UniqueName: \"kubernetes.io/projected/d0ed825a-5a7b-454e-80f7-5cfa3d459032-kube-api-access-mx9zs\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwwc7\" (UID: \"d0ed825a-5a7b-454e-80f7-5cfa3d459032\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999423 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999538 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c85caecd-2eec-479e-82a3-2ac3c53c79c6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999613 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/46368914-416a-4849-9652-9c3ddae03429-trusted-ca\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999727 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf8w6\" (UniqueName: \"kubernetes.io/projected/d667690f-b387-424c-b130-e50277eaa0c4-kube-api-access-mf8w6\") pod \"olm-operator-6b444d44fb-lfqhn\" (UID: \"d667690f-b387-424c-b130-e50277eaa0c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:06.999963 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/981b1dce-6375-4c49-9b16-144c98fc886c-etcd-ca\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.000004 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-tls\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.002700 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:07.502666595 +0000 UTC m=+142.631949257 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:07 crc kubenswrapper[4907]: W0127 18:08:07.009211 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd427ba67_a9ef_41ef_a2f3_fbe9eb87a69e.slice/crio-f364b8885ae815deea073e4c77d017094684040b0072b957b0f5f5e3807acc02 WatchSource:0}: Error finding container f364b8885ae815deea073e4c77d017094684040b0072b957b0f5f5e3807acc02: Status 404 returned error can't find the container with id f364b8885ae815deea073e4c77d017094684040b0072b957b0f5f5e3807acc02
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.101607 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.101789 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9t88\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-kube-api-access-b9t88\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.101809 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ca4e1e6-2eaa-436c-a083-0d33fe87c756-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6qsv8\" (UID: \"8ca4e1e6-2eaa-436c-a083-0d33fe87c756\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.101831 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx9zs\" (UniqueName: \"kubernetes.io/projected/d0ed825a-5a7b-454e-80f7-5cfa3d459032-kube-api-access-mx9zs\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwwc7\" (UID: \"d0ed825a-5a7b-454e-80f7-5cfa3d459032\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.101861 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c85caecd-2eec-479e-82a3-2ac3c53c79c6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.101879 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/46368914-416a-4849-9652-9c3ddae03429-trusted-ca\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.101897 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf8w6\" (UniqueName: \"kubernetes.io/projected/d667690f-b387-424c-b130-e50277eaa0c4-kube-api-access-mf8w6\") pod \"olm-operator-6b444d44fb-lfqhn\" (UID: \"d667690f-b387-424c-b130-e50277eaa0c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.101915 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/981b1dce-6375-4c49-9b16-144c98fc886c-etcd-ca\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.101929 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-tls\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102042 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt242\" (UniqueName: \"kubernetes.io/projected/5f465d65-342c-410f-9374-d8c5ac6f03e0-kube-api-access-tt242\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102061 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d3d480c-01ea-4ec4-b238-16e70bb9caff-serving-cert\") pod \"service-ca-operator-777779d784-xzht6\" (UID: \"3d3d480c-01ea-4ec4-b238-16e70bb9caff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102133 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/46368914-416a-4849-9652-9c3ddae03429-bound-sa-token\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102149 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5997de10-6cbe-4099-aa7f-4f50effd0c4e-cert\") pod \"ingress-canary-qtfgw\" (UID: \"5997de10-6cbe-4099-aa7f-4f50effd0c4e\") " pod="openshift-ingress-canary/ingress-canary-qtfgw"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102164 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ce158a5-7aba-4844-97ef-733b55d1694e-metrics-tls\") pod \"dns-default-4tcrf\" (UID: \"6ce158a5-7aba-4844-97ef-733b55d1694e\") " pod="openshift-dns/dns-default-4tcrf"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102179 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d3d480c-01ea-4ec4-b238-16e70bb9caff-config\") pod \"service-ca-operator-777779d784-xzht6\" (UID: \"3d3d480c-01ea-4ec4-b238-16e70bb9caff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102196 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/981b1dce-6375-4c49-9b16-144c98fc886c-etcd-service-ca\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102211 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/981b1dce-6375-4c49-9b16-144c98fc886c-config\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102229 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbf6d5b6-a4d1-4c8b-a111-9802cec24aab-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bz48v\" (UID: \"fbf6d5b6-a4d1-4c8b-a111-9802cec24aab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102245 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d667690f-b387-424c-b130-e50277eaa0c4-srv-cert\") pod \"olm-operator-6b444d44fb-lfqhn\" (UID: \"d667690f-b387-424c-b130-e50277eaa0c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102268 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/831e1c4c-ecd4-4617-ab4a-37acc328a062-auth-proxy-config\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102285 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/178c40d2-9468-43b5-b33b-f95b60268091-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gtn\" (UID: \"178c40d2-9468-43b5-b33b-f95b60268091\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102300 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/decaba3c-d32c-4a1d-b413-52c195883560-signing-key\") pod \"service-ca-9c57cc56f-flpjm\" (UID: \"decaba3c-d32c-4a1d-b413-52c195883560\") " pod="openshift-service-ca/service-ca-9c57cc56f-flpjm"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102315 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-socket-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102330 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-mountpoint-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102345 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjx8g\" (UniqueName: \"kubernetes.io/projected/f19550a4-d60c-4d8b-ae24-8b43c7b83736-kube-api-access-cjx8g\") pod \"machine-config-server-bvqd5\" (UID: \"f19550a4-d60c-4d8b-ae24-8b43c7b83736\") " pod="openshift-machine-config-operator/machine-config-server-bvqd5"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102360 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-csi-data-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102377 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbf6d5b6-a4d1-4c8b-a111-9802cec24aab-config\") pod \"kube-apiserver-operator-766d6c64bb-bz48v\" (UID: \"fbf6d5b6-a4d1-4c8b-a111-9802cec24aab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102394 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4479b1ff-dfc5-4b7e-9b25-8472bcd58f56-config\") pod \"kube-controller-manager-operator-78b949d7b-j9xmt\" (UID: \"4479b1ff-dfc5-4b7e-9b25-8472bcd58f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102417 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/981b1dce-6375-4c49-9b16-144c98fc886c-serving-cert\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102435 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-registration-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102456 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbf6d5b6-a4d1-4c8b-a111-9802cec24aab-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bz48v\" (UID: \"fbf6d5b6-a4d1-4c8b-a111-9802cec24aab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102472 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhqkf\" (UniqueName: \"kubernetes.io/projected/6ce158a5-7aba-4844-97ef-733b55d1694e-kube-api-access-dhqkf\") pod \"dns-default-4tcrf\" (UID: \"6ce158a5-7aba-4844-97ef-733b55d1694e\") " pod="openshift-dns/dns-default-4tcrf"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102487 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-tmpfs\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: \"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102503 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pn59x\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn59x"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102518 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-certificates\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102542 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5wcs\" (UniqueName: \"kubernetes.io/projected/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-kube-api-access-f5wcs\") pod \"marketplace-operator-79b997595-pn59x\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn59x"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102576 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0ed825a-5a7b-454e-80f7-5cfa3d459032-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwwc7\" (UID: \"d0ed825a-5a7b-454e-80f7-5cfa3d459032\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102593 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea5acd47-9e68-4600-beff-4ad9454dde7a-proxy-tls\") pod \"machine-config-controller-84d6567774-x42x8\" (UID: \"ea5acd47-9e68-4600-beff-4ad9454dde7a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102615 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf23b11-96d6-4c77-8145-b7928844bd5e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kf4x\" (UID: \"daf23b11-96d6-4c77-8145-b7928844bd5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102629 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf23b11-96d6-4c77-8145-b7928844bd5e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kf4x\" (UID: \"daf23b11-96d6-4c77-8145-b7928844bd5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102645 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw2pn\" (UniqueName: \"kubernetes.io/projected/8ca4e1e6-2eaa-436c-a083-0d33fe87c756-kube-api-access-pw2pn\") pod \"multus-admission-controller-857f4d67dd-6qsv8\" (UID: \"8ca4e1e6-2eaa-436c-a083-0d33fe87c756\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102659 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/178c40d2-9468-43b5-b33b-f95b60268091-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gtn\" (UID: \"178c40d2-9468-43b5-b33b-f95b60268091\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102673 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c85caecd-2eec-479e-82a3-2ac3c53c79c6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102689 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-secret-volume\") pod \"collect-profiles-29492280-hkhf5\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102706 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/434d6d34-127a-4de6-8f5c-6ea67008f70a-profile-collector-cert\") pod \"catalog-operator-68c6474976-85nxl\" (UID: \"434d6d34-127a-4de6-8f5c-6ea67008f70a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.102721 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f19550a4-d60c-4d8b-ae24-8b43c7b83736-certs\") pod \"machine-config-server-bvqd5\" (UID: \"f19550a4-d60c-4d8b-ae24-8b43c7b83736\") " pod="openshift-machine-config-operator/machine-config-server-bvqd5"
Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.102803 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:07.602781399 +0000 UTC m=+142.732064011 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104493 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p58ls\" (UniqueName: \"kubernetes.io/projected/46368914-416a-4849-9652-9c3ddae03429-kube-api-access-p58ls\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104524 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62sfv\" (UniqueName: \"kubernetes.io/projected/981b1dce-6375-4c49-9b16-144c98fc886c-kube-api-access-62sfv\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104544 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/46368914-416a-4849-9652-9c3ddae03429-metrics-tls\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6"
Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104572 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/981b1dce-6375-4c49-9b16-144c98fc886c-etcd-client\") pod \"etcd-operator-b45778765-q8qbc\" (UID:
\"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104589 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42d77196-c327-47c3-8713-d23038a08e13-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7v8cj\" (UID: \"42d77196-c327-47c3-8713-d23038a08e13\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104608 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/831e1c4c-ecd4-4617-ab4a-37acc328a062-images\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104629 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-webhook-cert\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: \"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104645 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea5acd47-9e68-4600-beff-4ad9454dde7a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-x42x8\" (UID: \"ea5acd47-9e68-4600-beff-4ad9454dde7a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104660 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-config-volume\") pod \"collect-profiles-29492280-hkhf5\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104675 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4479b1ff-dfc5-4b7e-9b25-8472bcd58f56-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-j9xmt\" (UID: \"4479b1ff-dfc5-4b7e-9b25-8472bcd58f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104693 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lxsd\" (UniqueName: \"kubernetes.io/projected/486be3bf-a27f-4a44-97f3-751b782bee1f-kube-api-access-6lxsd\") pod \"package-server-manager-789f6589d5-tb79g\" (UID: \"486be3bf-a27f-4a44-97f3-751b782bee1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104714 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/434d6d34-127a-4de6-8f5c-6ea67008f70a-srv-cert\") pod \"catalog-operator-68c6474976-85nxl\" (UID: \"434d6d34-127a-4de6-8f5c-6ea67008f70a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104732 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mktxp\" (UniqueName: \"kubernetes.io/projected/434d6d34-127a-4de6-8f5c-6ea67008f70a-kube-api-access-mktxp\") pod \"catalog-operator-68c6474976-85nxl\" (UID: 
\"434d6d34-127a-4de6-8f5c-6ea67008f70a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104750 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f19550a4-d60c-4d8b-ae24-8b43c7b83736-node-bootstrap-token\") pod \"machine-config-server-bvqd5\" (UID: \"f19550a4-d60c-4d8b-ae24-8b43c7b83736\") " pod="openshift-machine-config-operator/machine-config-server-bvqd5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104773 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6wm4\" (UniqueName: \"kubernetes.io/projected/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-kube-api-access-l6wm4\") pod \"collect-profiles-29492280-hkhf5\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104788 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/831e1c4c-ecd4-4617-ab4a-37acc328a062-proxy-tls\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104808 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cvzq\" (UniqueName: \"kubernetes.io/projected/831e1c4c-ecd4-4617-ab4a-37acc328a062-kube-api-access-9cvzq\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104827 4907 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-trusted-ca\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104845 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/decaba3c-d32c-4a1d-b413-52c195883560-signing-cabundle\") pod \"service-ca-9c57cc56f-flpjm\" (UID: \"decaba3c-d32c-4a1d-b413-52c195883560\") " pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104869 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb7gj\" (UniqueName: \"kubernetes.io/projected/42d77196-c327-47c3-8713-d23038a08e13-kube-api-access-tb7gj\") pod \"control-plane-machine-set-operator-78cbb6b69f-7v8cj\" (UID: \"42d77196-c327-47c3-8713-d23038a08e13\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104889 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-plugins-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104909 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jz9k\" (UniqueName: \"kubernetes.io/projected/decaba3c-d32c-4a1d-b413-52c195883560-kube-api-access-9jz9k\") pod \"service-ca-9c57cc56f-flpjm\" (UID: \"decaba3c-d32c-4a1d-b413-52c195883560\") " pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 
18:08:07.104925 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d667690f-b387-424c-b130-e50277eaa0c4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lfqhn\" (UID: \"d667690f-b387-424c-b130-e50277eaa0c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104941 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ce158a5-7aba-4844-97ef-733b55d1694e-config-volume\") pod \"dns-default-4tcrf\" (UID: \"6ce158a5-7aba-4844-97ef-733b55d1694e\") " pod="openshift-dns/dns-default-4tcrf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104958 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/daf23b11-96d6-4c77-8145-b7928844bd5e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kf4x\" (UID: \"daf23b11-96d6-4c77-8145-b7928844bd5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104974 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pn59x\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.104990 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn29f\" (UniqueName: \"kubernetes.io/projected/ea5acd47-9e68-4600-beff-4ad9454dde7a-kube-api-access-xn29f\") pod \"machine-config-controller-84d6567774-x42x8\" (UID: 
\"ea5acd47-9e68-4600-beff-4ad9454dde7a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.105007 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfgcr\" (UniqueName: \"kubernetes.io/projected/5997de10-6cbe-4099-aa7f-4f50effd0c4e-kube-api-access-tfgcr\") pod \"ingress-canary-qtfgw\" (UID: \"5997de10-6cbe-4099-aa7f-4f50effd0c4e\") " pod="openshift-ingress-canary/ingress-canary-qtfgw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.105027 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m7r5\" (UniqueName: \"kubernetes.io/projected/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-kube-api-access-2m7r5\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: \"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.105046 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn456\" (UniqueName: \"kubernetes.io/projected/6781da2d-2096-43fc-857d-d46734c50e16-kube-api-access-cn456\") pod \"migrator-59844c95c7-rv75f\" (UID: \"6781da2d-2096-43fc-857d-d46734c50e16\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.105312 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0ed825a-5a7b-454e-80f7-5cfa3d459032-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwwc7\" (UID: \"d0ed825a-5a7b-454e-80f7-5cfa3d459032\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.105330 4907 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-bound-sa-token\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.105347 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc7ln\" (UniqueName: \"kubernetes.io/projected/178c40d2-9468-43b5-b33b-f95b60268091-kube-api-access-xc7ln\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gtn\" (UID: \"178c40d2-9468-43b5-b33b-f95b60268091\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.105376 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gmqq\" (UniqueName: \"kubernetes.io/projected/3d3d480c-01ea-4ec4-b238-16e70bb9caff-kube-api-access-6gmqq\") pod \"service-ca-operator-777779d784-xzht6\" (UID: \"3d3d480c-01ea-4ec4-b238-16e70bb9caff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.105391 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4479b1ff-dfc5-4b7e-9b25-8472bcd58f56-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-j9xmt\" (UID: \"4479b1ff-dfc5-4b7e-9b25-8472bcd58f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.105413 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-apiservice-cert\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: 
\"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.105431 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/486be3bf-a27f-4a44-97f3-751b782bee1f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tb79g\" (UID: \"486be3bf-a27f-4a44-97f3-751b782bee1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.106097 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/981b1dce-6375-4c49-9b16-144c98fc886c-etcd-ca\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.106170 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0ed825a-5a7b-454e-80f7-5cfa3d459032-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwwc7\" (UID: \"d0ed825a-5a7b-454e-80f7-5cfa3d459032\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.108192 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ca4e1e6-2eaa-436c-a083-0d33fe87c756-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6qsv8\" (UID: \"8ca4e1e6-2eaa-436c-a083-0d33fe87c756\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.109656 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-trusted-ca\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.110136 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ce158a5-7aba-4844-97ef-733b55d1694e-config-volume\") pod \"dns-default-4tcrf\" (UID: \"6ce158a5-7aba-4844-97ef-733b55d1694e\") " pod="openshift-dns/dns-default-4tcrf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.113662 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbf6d5b6-a4d1-4c8b-a111-9802cec24aab-config\") pod \"kube-apiserver-operator-766d6c64bb-bz48v\" (UID: \"fbf6d5b6-a4d1-4c8b-a111-9802cec24aab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.113752 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-config-volume\") pod \"collect-profiles-29492280-hkhf5\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.114331 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/981b1dce-6375-4c49-9b16-144c98fc886c-config\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.115421 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-certificates\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.116717 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d3d480c-01ea-4ec4-b238-16e70bb9caff-serving-cert\") pod \"service-ca-operator-777779d784-xzht6\" (UID: \"3d3d480c-01ea-4ec4-b238-16e70bb9caff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.117429 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-tmpfs\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: \"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.117508 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/decaba3c-d32c-4a1d-b413-52c195883560-signing-cabundle\") pod \"service-ca-9c57cc56f-flpjm\" (UID: \"decaba3c-d32c-4a1d-b413-52c195883560\") " pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.119094 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c85caecd-2eec-479e-82a3-2ac3c53c79c6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.120944 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/4479b1ff-dfc5-4b7e-9b25-8472bcd58f56-config\") pod \"kube-controller-manager-operator-78b949d7b-j9xmt\" (UID: \"4479b1ff-dfc5-4b7e-9b25-8472bcd58f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.121601 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea5acd47-9e68-4600-beff-4ad9454dde7a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-x42x8\" (UID: \"ea5acd47-9e68-4600-beff-4ad9454dde7a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.122004 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/981b1dce-6375-4c49-9b16-144c98fc886c-etcd-service-ca\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.124876 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/46368914-416a-4849-9652-9c3ddae03429-trusted-ca\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.125298 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0ed825a-5a7b-454e-80f7-5cfa3d459032-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwwc7\" (UID: \"d0ed825a-5a7b-454e-80f7-5cfa3d459032\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.125909 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pn59x\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.127213 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/831e1c4c-ecd4-4617-ab4a-37acc328a062-auth-proxy-config\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.129602 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4479b1ff-dfc5-4b7e-9b25-8472bcd58f56-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-j9xmt\" (UID: \"4479b1ff-dfc5-4b7e-9b25-8472bcd58f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.130397 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/831e1c4c-ecd4-4617-ab4a-37acc328a062-images\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.131284 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3d3d480c-01ea-4ec4-b238-16e70bb9caff-config\") pod \"service-ca-operator-777779d784-xzht6\" (UID: \"3d3d480c-01ea-4ec4-b238-16e70bb9caff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.136513 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pn59x\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.136888 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-webhook-cert\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: \"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.137148 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bjfcf"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.139758 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/178c40d2-9468-43b5-b33b-f95b60268091-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gtn\" (UID: \"178c40d2-9468-43b5-b33b-f95b60268091\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.140166 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf23b11-96d6-4c77-8145-b7928844bd5e-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-9kf4x\" (UID: \"daf23b11-96d6-4c77-8145-b7928844bd5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.140885 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf23b11-96d6-4c77-8145-b7928844bd5e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kf4x\" (UID: \"daf23b11-96d6-4c77-8145-b7928844bd5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.141011 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c85caecd-2eec-479e-82a3-2ac3c53c79c6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.141109 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/434d6d34-127a-4de6-8f5c-6ea67008f70a-profile-collector-cert\") pod \"catalog-operator-68c6474976-85nxl\" (UID: \"434d6d34-127a-4de6-8f5c-6ea67008f70a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.141680 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-tls\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.142152 4907 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ce158a5-7aba-4844-97ef-733b55d1694e-metrics-tls\") pod \"dns-default-4tcrf\" (UID: \"6ce158a5-7aba-4844-97ef-733b55d1694e\") " pod="openshift-dns/dns-default-4tcrf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.143104 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea5acd47-9e68-4600-beff-4ad9454dde7a-proxy-tls\") pod \"machine-config-controller-84d6567774-x42x8\" (UID: \"ea5acd47-9e68-4600-beff-4ad9454dde7a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.143700 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d667690f-b387-424c-b130-e50277eaa0c4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lfqhn\" (UID: \"d667690f-b387-424c-b130-e50277eaa0c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.144878 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/981b1dce-6375-4c49-9b16-144c98fc886c-serving-cert\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.145388 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/178c40d2-9468-43b5-b33b-f95b60268091-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gtn\" (UID: \"178c40d2-9468-43b5-b33b-f95b60268091\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 
18:08:07.162211 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbf6d5b6-a4d1-4c8b-a111-9802cec24aab-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bz48v\" (UID: \"fbf6d5b6-a4d1-4c8b-a111-9802cec24aab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.162291 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-secret-volume\") pod \"collect-profiles-29492280-hkhf5\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.162425 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d667690f-b387-424c-b130-e50277eaa0c4-srv-cert\") pod \"olm-operator-6b444d44fb-lfqhn\" (UID: \"d667690f-b387-424c-b130-e50277eaa0c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.162466 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/decaba3c-d32c-4a1d-b413-52c195883560-signing-key\") pod \"service-ca-9c57cc56f-flpjm\" (UID: \"decaba3c-d32c-4a1d-b413-52c195883560\") " pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.162654 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/831e1c4c-ecd4-4617-ab4a-37acc328a062-proxy-tls\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:07 crc 
kubenswrapper[4907]: I0127 18:08:07.163445 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/486be3bf-a27f-4a44-97f3-751b782bee1f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tb79g\" (UID: \"486be3bf-a27f-4a44-97f3-751b782bee1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.164128 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/981b1dce-6375-4c49-9b16-144c98fc886c-etcd-client\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.164190 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/434d6d34-127a-4de6-8f5c-6ea67008f70a-srv-cert\") pod \"catalog-operator-68c6474976-85nxl\" (UID: \"434d6d34-127a-4de6-8f5c-6ea67008f70a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.164381 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/46368914-416a-4849-9652-9c3ddae03429-metrics-tls\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.167338 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9t88\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-kube-api-access-b9t88\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.168415 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42d77196-c327-47c3-8713-d23038a08e13-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7v8cj\" (UID: \"42d77196-c327-47c3-8713-d23038a08e13\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.172167 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/46368914-416a-4849-9652-9c3ddae03429-bound-sa-token\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.174134 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-apiservice-cert\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: \"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.184633 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9j78b"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.190407 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc7ln\" (UniqueName: \"kubernetes.io/projected/178c40d2-9468-43b5-b33b-f95b60268091-kube-api-access-xc7ln\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2gtn\" (UID: \"178c40d2-9468-43b5-b33b-f95b60268091\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.207873 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f19550a4-d60c-4d8b-ae24-8b43c7b83736-certs\") pod \"machine-config-server-bvqd5\" (UID: \"f19550a4-d60c-4d8b-ae24-8b43c7b83736\") " pod="openshift-machine-config-operator/machine-config-server-bvqd5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208012 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f19550a4-d60c-4d8b-ae24-8b43c7b83736-node-bootstrap-token\") pod \"machine-config-server-bvqd5\" (UID: \"f19550a4-d60c-4d8b-ae24-8b43c7b83736\") " pod="openshift-machine-config-operator/machine-config-server-bvqd5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208102 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-plugins-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208172 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfgcr\" (UniqueName: \"kubernetes.io/projected/5997de10-6cbe-4099-aa7f-4f50effd0c4e-kube-api-access-tfgcr\") pod \"ingress-canary-qtfgw\" (UID: \"5997de10-6cbe-4099-aa7f-4f50effd0c4e\") " pod="openshift-ingress-canary/ingress-canary-qtfgw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208269 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208310 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt242\" (UniqueName: \"kubernetes.io/projected/5f465d65-342c-410f-9374-d8c5ac6f03e0-kube-api-access-tt242\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208370 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5997de10-6cbe-4099-aa7f-4f50effd0c4e-cert\") pod \"ingress-canary-qtfgw\" (UID: \"5997de10-6cbe-4099-aa7f-4f50effd0c4e\") " pod="openshift-ingress-canary/ingress-canary-qtfgw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208411 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-mountpoint-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208436 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjx8g\" (UniqueName: \"kubernetes.io/projected/f19550a4-d60c-4d8b-ae24-8b43c7b83736-kube-api-access-cjx8g\") pod \"machine-config-server-bvqd5\" (UID: \"f19550a4-d60c-4d8b-ae24-8b43c7b83736\") " pod="openshift-machine-config-operator/machine-config-server-bvqd5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208460 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-socket-dir\") pod 
\"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208482 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-csi-data-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208508 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-registration-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208712 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-bound-sa-token\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.208916 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-registration-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.209003 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-mountpoint-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: 
\"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.209246 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-socket-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.209345 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-csi-data-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.209406 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5f465d65-342c-410f-9374-d8c5ac6f03e0-plugins-dir\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.209642 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:07.709629182 +0000 UTC m=+142.838911794 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.217075 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f19550a4-d60c-4d8b-ae24-8b43c7b83736-certs\") pod \"machine-config-server-bvqd5\" (UID: \"f19550a4-d60c-4d8b-ae24-8b43c7b83736\") " pod="openshift-machine-config-operator/machine-config-server-bvqd5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.227085 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5997de10-6cbe-4099-aa7f-4f50effd0c4e-cert\") pod \"ingress-canary-qtfgw\" (UID: \"5997de10-6cbe-4099-aa7f-4f50effd0c4e\") " pod="openshift-ingress-canary/ingress-canary-qtfgw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.227244 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf8w6\" (UniqueName: \"kubernetes.io/projected/d667690f-b387-424c-b130-e50277eaa0c4-kube-api-access-mf8w6\") pod \"olm-operator-6b444d44fb-lfqhn\" (UID: \"d667690f-b387-424c-b130-e50277eaa0c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.231088 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f19550a4-d60c-4d8b-ae24-8b43c7b83736-node-bootstrap-token\") pod \"machine-config-server-bvqd5\" (UID: \"f19550a4-d60c-4d8b-ae24-8b43c7b83736\") " 
pod="openshift-machine-config-operator/machine-config-server-bvqd5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.240834 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gmqq\" (UniqueName: \"kubernetes.io/projected/3d3d480c-01ea-4ec4-b238-16e70bb9caff-kube-api-access-6gmqq\") pod \"service-ca-operator-777779d784-xzht6\" (UID: \"3d3d480c-01ea-4ec4-b238-16e70bb9caff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.257045 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/daf23b11-96d6-4c77-8145-b7928844bd5e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9kf4x\" (UID: \"daf23b11-96d6-4c77-8145-b7928844bd5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.264731 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.268912 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lg6ln"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.282053 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-znwrp"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.282986 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb7gj\" (UniqueName: \"kubernetes.io/projected/42d77196-c327-47c3-8713-d23038a08e13-kube-api-access-tb7gj\") pod \"control-plane-machine-set-operator-78cbb6b69f-7v8cj\" (UID: \"42d77196-c327-47c3-8713-d23038a08e13\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj" Jan 27 18:08:07 crc 
kubenswrapper[4907]: I0127 18:08:07.297246 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn29f\" (UniqueName: \"kubernetes.io/projected/ea5acd47-9e68-4600-beff-4ad9454dde7a-kube-api-access-xn29f\") pod \"machine-config-controller-84d6567774-x42x8\" (UID: \"ea5acd47-9e68-4600-beff-4ad9454dde7a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.309278 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.309595 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:07.809543839 +0000 UTC m=+142.938826461 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.310140 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.310934 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:07.810821337 +0000 UTC m=+142.940103949 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.322088 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx9zs\" (UniqueName: \"kubernetes.io/projected/d0ed825a-5a7b-454e-80f7-5cfa3d459032-kube-api-access-mx9zs\") pod \"kube-storage-version-migrator-operator-b67b599dd-wwwc7\" (UID: \"d0ed825a-5a7b-454e-80f7-5cfa3d459032\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.331000 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.341724 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.346161 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn456\" (UniqueName: \"kubernetes.io/projected/6781da2d-2096-43fc-857d-d46734c50e16-kube-api-access-cn456\") pod \"migrator-59844c95c7-rv75f\" (UID: \"6781da2d-2096-43fc-857d-d46734c50e16\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.358409 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-78q6j"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.360352 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhqkf\" (UniqueName: \"kubernetes.io/projected/6ce158a5-7aba-4844-97ef-733b55d1694e-kube-api-access-dhqkf\") pod \"dns-default-4tcrf\" (UID: \"6ce158a5-7aba-4844-97ef-733b55d1694e\") " pod="openshift-dns/dns-default-4tcrf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.380056 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5wcs\" (UniqueName: \"kubernetes.io/projected/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-kube-api-access-f5wcs\") pod \"marketplace-operator-79b997595-pn59x\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.382910 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" Jan 27 18:08:07 crc kubenswrapper[4907]: W0127 18:08:07.396187 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38363947_4768_44b8_b3fe_f7b5b482da55.slice/crio-558daeeea5f2cbdade78ae93e4b5358a46b3484e63e0b0e9305a26d16f1121ef WatchSource:0}: Error finding container 558daeeea5f2cbdade78ae93e4b5358a46b3484e63e0b0e9305a26d16f1121ef: Status 404 returned error can't find the container with id 558daeeea5f2cbdade78ae93e4b5358a46b3484e63e0b0e9305a26d16f1121ef Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.396844 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw2pn\" (UniqueName: \"kubernetes.io/projected/8ca4e1e6-2eaa-436c-a083-0d33fe87c756-kube-api-access-pw2pn\") pod \"multus-admission-controller-857f4d67dd-6qsv8\" (UID: \"8ca4e1e6-2eaa-436c-a083-0d33fe87c756\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.399245 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.408517 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-4tcrf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.411489 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.411653 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:07.911628511 +0000 UTC m=+143.040911123 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.411710 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.412049 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-01-27 18:08:07.912041534 +0000 UTC m=+143.041324136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.486262 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbf6d5b6-a4d1-4c8b-a111-9802cec24aab-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bz48v\" (UID: \"fbf6d5b6-a4d1-4c8b-a111-9802cec24aab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.486718 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4479b1ff-dfc5-4b7e-9b25-8472bcd58f56-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-j9xmt\" (UID: \"4479b1ff-dfc5-4b7e-9b25-8472bcd58f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.487088 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mktxp\" (UniqueName: \"kubernetes.io/projected/434d6d34-127a-4de6-8f5c-6ea67008f70a-kube-api-access-mktxp\") pod \"catalog-operator-68c6474976-85nxl\" (UID: \"434d6d34-127a-4de6-8f5c-6ea67008f70a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.506871 4907 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.506890 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lxsd\" (UniqueName: \"kubernetes.io/projected/486be3bf-a27f-4a44-97f3-751b782bee1f-kube-api-access-6lxsd\") pod \"package-server-manager-789f6589d5-tb79g\" (UID: \"486be3bf-a27f-4a44-97f3-751b782bee1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.508905 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-grwdr"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.512282 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-h79fx" event={"ID":"c8a31b60-14c7-4b73-a17f-60d101c0119b","Type":"ContainerStarted","Data":"4a5bfa6da2878ef843f8deef09696a24a46a75213b727c839a22dd03f7364541"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.512337 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-h79fx" event={"ID":"c8a31b60-14c7-4b73-a17f-60d101c0119b","Type":"ContainerStarted","Data":"d2a5d5dce677965b17c0ca35c20daae0415842ae46a59912803764ae0ae6a316"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.513305 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-h79fx" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.513306 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.513395 4907 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.013374843 +0000 UTC m=+143.142657455 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.513949 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.514279 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.014267809 +0000 UTC m=+143.143550411 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.517128 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.517162 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.517763 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" event={"ID":"db7629bc-e5a1-44e1-9af4-ecc83acfda75","Type":"ContainerStarted","Data":"32f9fc7b4aa47ac4989c51f777b9f45caaf74a3f6d839b65c03c029afd8ca470"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.520376 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.526207 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" event={"ID":"248aff8a-60f5-4154-a7bb-2dd95e4b2555","Type":"ContainerStarted","Data":"53f1bb78246a95f04a0e3a59320d7de5b66a380634a406b2deaad462424ff23c"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.526841 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.527483 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6wm4\" (UniqueName: \"kubernetes.io/projected/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-kube-api-access-l6wm4\") pod \"collect-profiles-29492280-hkhf5\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.532417 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" event={"ID":"1c678cbb-a03d-4ed8-85bd-befc2884454e","Type":"ContainerStarted","Data":"d99c210a2afd1a8dd3ab7ad3937e9daba804e9a0fbf6ebffd2362282c67ee2e1"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.533041 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.536751 4907 patch_prober.go:28] interesting pod/console-operator-58897d9998-bjfcf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 27 18:08:07 crc 
kubenswrapper[4907]: I0127 18:08:07.536779 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" podUID="1c678cbb-a03d-4ed8-85bd-befc2884454e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.537684 4907 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9j78b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.537707 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" podUID="248aff8a-60f5-4154-a7bb-2dd95e4b2555" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.538058 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" event={"ID":"b3e7e0e7-2f37-4998-af7c-6e5d373a1264","Type":"ContainerStarted","Data":"7e11b6f7056136bab0dac9410bba8d691c4dd67358da145faaeef6657053eabb"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.539627 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" event={"ID":"9f254819-bf2c-4c38-881f-8d12a0d56278","Type":"ContainerStarted","Data":"e90ccbb7d1f9506a1d7c5832c29bc196054837999c5a0a011ef29e54c9ff8054"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.539812 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p58ls\" (UniqueName: 
\"kubernetes.io/projected/46368914-416a-4849-9652-9c3ddae03429-kube-api-access-p58ls\") pod \"ingress-operator-5b745b69d9-l4hv6\" (UID: \"46368914-416a-4849-9652-9c3ddae03429\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.547655 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.548935 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h72cm" event={"ID":"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e","Type":"ContainerStarted","Data":"4a63025614479ee19d91ddae3c53b6b1b161f48d3ae54e551048bae3e81386a3"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.548960 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h72cm" event={"ID":"d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e","Type":"ContainerStarted","Data":"f364b8885ae815deea073e4c77d017094684040b0072b957b0f5f5e3807acc02"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.552783 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6" event={"ID":"28a88f65-9871-4372-b728-ed61f22642e4","Type":"ContainerStarted","Data":"6f4c2bf3bee5eb016a2fe2297cdf16be3247579fde93f61857aa0e5fd2f98c42"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.554615 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.557694 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" event={"ID":"38363947-4768-44b8-b3fe-f7b5b482da55","Type":"ContainerStarted","Data":"558daeeea5f2cbdade78ae93e4b5358a46b3484e63e0b0e9305a26d16f1121ef"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.559074 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m7r5\" (UniqueName: \"kubernetes.io/projected/7ca8f687-0e6e-4df7-8dc1-0bb597588b6d-kube-api-access-2m7r5\") pod \"packageserver-d55dfcdfc-nrdnf\" (UID: \"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.560328 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" event={"ID":"df82c5b4-85d6-4b74-85f5-46d598058d2d","Type":"ContainerStarted","Data":"4a8097cce43ecee42c97c1d9ab5869697b268e0b34ef8036d5f9d6948ff49dc9"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.562289 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.568075 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" event={"ID":"bb98d017-ae04-4e9d-9b9f-dde9530b7acf","Type":"ContainerStarted","Data":"7bac43dbe37ef69aab73ceb84da67c628af00be47c258c1c351f33f09618fc07"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.568121 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" event={"ID":"bb98d017-ae04-4e9d-9b9f-dde9530b7acf","Type":"ContainerStarted","Data":"8879a2adf5d3fbe3e0ecf787399134e695e87c642d46480f76194b3a13bbe9f6"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.568132 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" event={"ID":"bb98d017-ae04-4e9d-9b9f-dde9530b7acf","Type":"ContainerStarted","Data":"79e9d9e5874ad3fcba55ec581ddb50a5771b97d8ad8307740e22252df81492e8"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.569919 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.578419 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" event={"ID":"c2d359e7-9de4-4357-ae4c-8da07c1a880c","Type":"ContainerStarted","Data":"cc01864bb4f8a1120f92173489f6efaf64dc66769dbd5d75c406ce52e4f84c57"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.578465 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" event={"ID":"c2d359e7-9de4-4357-ae4c-8da07c1a880c","Type":"ContainerStarted","Data":"6f9c8d88539a6808f0797fdfd9e7f88c6f05f953590a40792865ee706324087c"} Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.581229 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jz9k\" (UniqueName: \"kubernetes.io/projected/decaba3c-d32c-4a1d-b413-52c195883560-kube-api-access-9jz9k\") pod \"service-ca-9c57cc56f-flpjm\" (UID: \"decaba3c-d32c-4a1d-b413-52c195883560\") " pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.583299 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.593523 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.604000 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.616095 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.616124 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62sfv\" (UniqueName: \"kubernetes.io/projected/981b1dce-6375-4c49-9b16-144c98fc886c-kube-api-access-62sfv\") pod \"etcd-operator-b45778765-q8qbc\" (UID: \"981b1dce-6375-4c49-9b16-144c98fc886c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.618647 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.618692 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5z9d9"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.624356 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.625695 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.125658527 +0000 UTC m=+143.254941139 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.626170 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.628530 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.128513651 +0000 UTC m=+143.257796263 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.634908 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.635137 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.637245 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cvzq\" (UniqueName: \"kubernetes.io/projected/831e1c4c-ecd4-4617-ab4a-37acc328a062-kube-api-access-9cvzq\") pod \"machine-config-operator-74547568cd-85kkw\" (UID: \"831e1c4c-ecd4-4617-ab4a-37acc328a062\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.650299 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjx8g\" (UniqueName: \"kubernetes.io/projected/f19550a4-d60c-4d8b-ae24-8b43c7b83736-kube-api-access-cjx8g\") pod \"machine-config-server-bvqd5\" (UID: \"f19550a4-d60c-4d8b-ae24-8b43c7b83736\") " pod="openshift-machine-config-operator/machine-config-server-bvqd5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.650427 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.657632 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfgcr\" (UniqueName: \"kubernetes.io/projected/5997de10-6cbe-4099-aa7f-4f50effd0c4e-kube-api-access-tfgcr\") pod \"ingress-canary-qtfgw\" (UID: \"5997de10-6cbe-4099-aa7f-4f50effd0c4e\") " pod="openshift-ingress-canary/ingress-canary-qtfgw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.666087 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.676255 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.690580 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.696221 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt242\" (UniqueName: \"kubernetes.io/projected/5f465d65-342c-410f-9374-d8c5ac6f03e0-kube-api-access-tt242\") pod \"csi-hostpathplugin-l59wn\" (UID: \"5f465d65-342c-410f-9374-d8c5ac6f03e0\") " pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.722279 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xzht6"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.727345 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.727723 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.227700057 +0000 UTC m=+143.356982669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.727855 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.730842 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.23082592 +0000 UTC m=+143.360108532 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.733435 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-l59wn" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.752858 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qtfgw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.759599 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bvqd5" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.813058 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.825605 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.829614 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.829736 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.329717137 +0000 UTC m=+143.458999749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.830009 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:07 crc kubenswrapper[4907]: E0127 18:08:07.830284 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.330277644 +0000 UTC m=+143.459560256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.850721 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.902433 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn"] Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.929775 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.929843 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 27 18:08:07 crc kubenswrapper[4907]: I0127 18:08:07.931044 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:07 crc 
kubenswrapper[4907]: E0127 18:08:07.931460 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.431441598 +0000 UTC m=+143.560724210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.018666 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.033123 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:08 crc kubenswrapper[4907]: E0127 18:08:08.033515 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.533501039 +0000 UTC m=+143.662783651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.134816 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:08 crc kubenswrapper[4907]: E0127 18:08:08.135214 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.635195359 +0000 UTC m=+143.764477971 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.139748 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.163582 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.239589 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:08 crc kubenswrapper[4907]: E0127 18:08:08.240173 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.740155176 +0000 UTC m=+143.869437788 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.340716 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:08 crc kubenswrapper[4907]: E0127 18:08:08.341044 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.841016551 +0000 UTC m=+143.970299163 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.342501 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.345981 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-4tcrf"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.374039 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.389837 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-flpjm"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.414349 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.441638 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:08 crc kubenswrapper[4907]: E0127 18:08:08.443198 4907 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:08.943185186 +0000 UTC m=+144.072467798 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.539357 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" podStartSLOduration=118.539336932 podStartE2EDuration="1m58.539336932s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:08.533481958 +0000 UTC m=+143.662764570" watchObservedRunningTime="2026-01-27 18:08:08.539336932 +0000 UTC m=+143.668619544" Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.543884 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:08 crc kubenswrapper[4907]: E0127 18:08:08.544186 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 18:08:09.044166325 +0000 UTC m=+144.173448937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.587866 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.595366 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6" event={"ID":"28a88f65-9871-4372-b728-ed61f22642e4","Type":"ContainerStarted","Data":"990fbac1ae65dd5c009c4ec7618eb00ba55a0f5e47ef13c2055df017e2eb8f65"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.596834 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" event={"ID":"3d3d480c-01ea-4ec4-b238-16e70bb9caff","Type":"ContainerStarted","Data":"9b6a6e0d3557e333aa6ebc3f09b5aeadb700791a26b82b9b31492ffc5a9e2a83"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.601181 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bvqd5" event={"ID":"f19550a4-d60c-4d8b-ae24-8b43c7b83736","Type":"ContainerStarted","Data":"aec38663e8358c263b3871f77568e546efde6179ff58c52e07c1980a44d087db"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.607453 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" 
event={"ID":"daf23b11-96d6-4c77-8145-b7928844bd5e","Type":"ContainerStarted","Data":"7d891d167b39c7dc384ec00211c7278855d3352fe73863bf1aa6bfb697a86351"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.612222 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6qsv8"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.614858 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" event={"ID":"b3e7e0e7-2f37-4998-af7c-6e5d373a1264","Type":"ContainerStarted","Data":"87b7ad9b3a9ce5e3199bc7b27d8d27ddbefb2f3ab58c3ef32127f866fb2e00bf"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.626280 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" event={"ID":"46368914-416a-4849-9652-9c3ddae03429","Type":"ContainerStarted","Data":"f1c988b6bdba0916d98a1428a297da63f92395f1b3b10fc521f23ae90ce2273c"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.629861 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" event={"ID":"fbf6d5b6-a4d1-4c8b-a111-9802cec24aab","Type":"ContainerStarted","Data":"001ba9c6a78fd8a0b8e310382e548650245bfd340ae5d0347a220dc47a737b5b"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.630704 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" event={"ID":"98f518f9-4f3f-45f1-80f4-b50d4eb03135","Type":"ContainerStarted","Data":"ab8e4f87a2ffbe236cf7a8b9faa6044f734bcd783a8ccf483230babd8b2d0aab"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.631445 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" 
event={"ID":"178c40d2-9468-43b5-b33b-f95b60268091","Type":"ContainerStarted","Data":"c7087ef47743d40ae48aec5afab43cef6549f9764b0752cf7289d5e27de0e427"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.633137 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" event={"ID":"4479b1ff-dfc5-4b7e-9b25-8472bcd58f56","Type":"ContainerStarted","Data":"8b21840fe73506f9ce9256b1be11ad3a0391c72b45cbea4c1dbd87ae175b38d7"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.634655 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" event={"ID":"38363947-4768-44b8-b3fe-f7b5b482da55","Type":"ContainerStarted","Data":"61f8aa33026aa73cee0be39f4ed156775e988a1bf641f3de35f9c4e6b9f36e7d"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.636070 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42" event={"ID":"f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f","Type":"ContainerStarted","Data":"b8952e73f83f3de3560dfe525a09e2a1044efd2da33cce2c4c3d904ba116c7f5"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.639221 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-grwdr" event={"ID":"c40070fe-7a8d-4f73-ad68-7e0a36680906","Type":"ContainerStarted","Data":"a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.639310 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-grwdr" event={"ID":"c40070fe-7a8d-4f73-ad68-7e0a36680906","Type":"ContainerStarted","Data":"4f4e687b11dd2ca7eb21e3c540aa81cbaa9c488161aa4b888533995942e8fa1a"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.640533 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" event={"ID":"d667690f-b387-424c-b130-e50277eaa0c4","Type":"ContainerStarted","Data":"f58df230b00f18699127081b816a50d8cd0c814c5b3bca7b53e9bdf6c934eee1"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.643128 4907 generic.go:334] "Generic (PLEG): container finished" podID="9f254819-bf2c-4c38-881f-8d12a0d56278" containerID="f06976dfcd4d145633eb1bc145b26b5d9bd8dea20c28526e934bbfc3f6bde8ff" exitCode=0 Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.643378 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" event={"ID":"9f254819-bf2c-4c38-881f-8d12a0d56278","Type":"ContainerDied","Data":"f06976dfcd4d145633eb1bc145b26b5d9bd8dea20c28526e934bbfc3f6bde8ff"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.645992 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:08 crc kubenswrapper[4907]: E0127 18:08:08.649286 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:09.149271126 +0000 UTC m=+144.278553738 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.649999 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4tcrf" event={"ID":"6ce158a5-7aba-4844-97ef-733b55d1694e","Type":"ContainerStarted","Data":"438542bcaf81b0c512cc8a170693d4ff4fdb06e0cff8121934c77b1b1bc61114"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.655984 4907 generic.go:334] "Generic (PLEG): container finished" podID="db7629bc-e5a1-44e1-9af4-ecc83acfda75" containerID="b92b7c2e43573dc8728d927bcac289f984bfbe45c4a3fe1432917c4917be66f5" exitCode=0 Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.656104 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" event={"ID":"db7629bc-e5a1-44e1-9af4-ecc83acfda75","Type":"ContainerDied","Data":"b92b7c2e43573dc8728d927bcac289f984bfbe45c4a3fe1432917c4917be66f5"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.657084 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-h79fx" podStartSLOduration=118.657068567 podStartE2EDuration="1m58.657068567s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:08.655918483 +0000 UTC m=+143.785201095" watchObservedRunningTime="2026-01-27 18:08:08.657068567 +0000 UTC m=+143.786351179" Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.658281 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" event={"ID":"248aff8a-60f5-4154-a7bb-2dd95e4b2555","Type":"ContainerStarted","Data":"ee3086e80dac48aa23ee0d75f12dbfe9110dca0a98e225e946f66043c47461fb"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.659313 4907 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9j78b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.659402 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" podUID="248aff8a-60f5-4154-a7bb-2dd95e4b2555" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.666056 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" event={"ID":"df82c5b4-85d6-4b74-85f5-46d598058d2d","Type":"ContainerStarted","Data":"764bfb723ebdd0c728f2ec4cbdbb8ff8d31c71769392ab7b2e1ccf580ddc01dc"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.669868 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" event={"ID":"1c678cbb-a03d-4ed8-85bd-befc2884454e","Type":"ContainerStarted","Data":"66d15642d3d727c638457f6bc9f91c0efdde17fb6615afcd4b939c46af04e15f"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.670393 4907 patch_prober.go:28] interesting pod/console-operator-58897d9998-bjfcf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 
10.217.0.14:8443: connect: connection refused" start-of-body= Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.670427 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" podUID="1c678cbb-a03d-4ed8-85bd-befc2884454e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.672102 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" event={"ID":"decaba3c-d32c-4a1d-b413-52c195883560","Type":"ContainerStarted","Data":"fcfbfad960f1a107e7b6ca84067670b60a3eef9dfe481aa3d4549fcfe71c6cfd"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.675540 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9" event={"ID":"d9ccc9d3-faa6-4c00-830b-2e1549a6725d","Type":"ContainerStarted","Data":"26cf2d10f39448594d8ec1b0ecc447533892109d73f76448a5c077f2335c5d11"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.677366 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj" event={"ID":"42d77196-c327-47c3-8713-d23038a08e13","Type":"ContainerStarted","Data":"c6299b2cc40813b33ac19a245a355c70688d86aa8a66a7eec2b95e4040aadd6f"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.680053 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" event={"ID":"b66d56fc-163d-469a-8a47-a3e1462b1af8","Type":"ContainerStarted","Data":"8c51ad7b9f37a6e7fa7e5aa89b67c1dfe374160dfa7f0dde2ba26e75fc5fb6d4"} Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.680823 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: 
Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.680878 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.737115 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.739698 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.746805 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:08 crc kubenswrapper[4907]: E0127 18:08:08.747532 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:09.247483513 +0000 UTC m=+144.376766125 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.747907 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:08 crc kubenswrapper[4907]: E0127 18:08:08.755425 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:09.255401338 +0000 UTC m=+144.384683950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.803590 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-h72cm" podStartSLOduration=118.803532002 podStartE2EDuration="1m58.803532002s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:08.775682148 +0000 UTC m=+143.904964760" watchObservedRunningTime="2026-01-27 18:08:08.803532002 +0000 UTC m=+143.932814614" Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.809861 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.814182 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.815999 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.819047 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pn59x"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.850162 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:08 crc kubenswrapper[4907]: E0127 18:08:08.850523 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:09.350502493 +0000 UTC m=+144.479785105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.854484 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.854543 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 27 18:08:08 crc kubenswrapper[4907]: W0127 18:08:08.874282 4907 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea5acd47_9e68_4600_beff_4ad9454dde7a.slice/crio-3de40f5725b574d578f5e60d79903db4c39e57ae08d9edf33ba1a29ffab5a484 WatchSource:0}: Error finding container 3de40f5725b574d578f5e60d79903db4c39e57ae08d9edf33ba1a29ffab5a484: Status 404 returned error can't find the container with id 3de40f5725b574d578f5e60d79903db4c39e57ae08d9edf33ba1a29ffab5a484 Jan 27 18:08:08 crc kubenswrapper[4907]: W0127 18:08:08.877174 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ca8f687_0e6e_4df7_8dc1_0bb597588b6d.slice/crio-18d7be0786a62aab310d598b8700b9ae5d4126f078caf285754349a4454e4105 WatchSource:0}: Error finding container 18d7be0786a62aab310d598b8700b9ae5d4126f078caf285754349a4454e4105: Status 404 returned error can't find the container with id 18d7be0786a62aab310d598b8700b9ae5d4126f078caf285754349a4454e4105 Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.889521 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.902370 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-q8qbc"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.951097 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qtfgw"] Jan 27 18:08:08 crc kubenswrapper[4907]: E0127 18:08:08.952089 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:09.452074819 +0000 UTC m=+144.581357431 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.952277 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.953593 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-l59wn"] Jan 27 18:08:08 crc kubenswrapper[4907]: I0127 18:08:08.975772 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw"] Jan 27 18:08:09 crc kubenswrapper[4907]: W0127 18:08:09.007327 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5997de10_6cbe_4099_aa7f_4f50effd0c4e.slice/crio-2a44aeb685fac8a49398cd4e329c21e65faaa03809aeff212999e12ce304a1f0 WatchSource:0}: Error finding container 2a44aeb685fac8a49398cd4e329c21e65faaa03809aeff212999e12ce304a1f0: Status 404 returned error can't find the container with id 2a44aeb685fac8a49398cd4e329c21e65faaa03809aeff212999e12ce304a1f0 Jan 27 18:08:09 crc kubenswrapper[4907]: W0127 18:08:09.020651 4907 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f465d65_342c_410f_9374_d8c5ac6f03e0.slice/crio-ba73a6f2772be08ccc40da6054d654e0f4612ea8c21539a6ad8af140a245776b WatchSource:0}: Error finding container ba73a6f2772be08ccc40da6054d654e0f4612ea8c21539a6ad8af140a245776b: Status 404 returned error can't find the container with id ba73a6f2772be08ccc40da6054d654e0f4612ea8c21539a6ad8af140a245776b Jan 27 18:08:09 crc kubenswrapper[4907]: W0127 18:08:09.033003 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod831e1c4c_ecd4_4617_ab4a_37acc328a062.slice/crio-9c23b06384c8796b65db167f2ddf6d24f041ca9332650ec970c8918cbbd96aa7 WatchSource:0}: Error finding container 9c23b06384c8796b65db167f2ddf6d24f041ca9332650ec970c8918cbbd96aa7: Status 404 returned error can't find the container with id 9c23b06384c8796b65db167f2ddf6d24f041ca9332650ec970c8918cbbd96aa7 Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.052771 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:09 crc kubenswrapper[4907]: E0127 18:08:09.053180 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:09.553159252 +0000 UTC m=+144.682441864 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.158456 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:09 crc kubenswrapper[4907]: E0127 18:08:09.158817 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:09.658802039 +0000 UTC m=+144.788084651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.212906 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" podStartSLOduration=119.21288791 podStartE2EDuration="1m59.21288791s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:09.212169489 +0000 UTC m=+144.341452101" watchObservedRunningTime="2026-01-27 18:08:09.21288791 +0000 UTC m=+144.342170522" Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.256787 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" podStartSLOduration=119.256764159 podStartE2EDuration="1m59.256764159s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:09.252500862 +0000 UTC m=+144.381783474" watchObservedRunningTime="2026-01-27 18:08:09.256764159 +0000 UTC m=+144.386046771" Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.259931 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:09 crc kubenswrapper[4907]: E0127 18:08:09.260284 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:09.760262502 +0000 UTC m=+144.889545114 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.361241 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:09 crc kubenswrapper[4907]: E0127 18:08:09.361679 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:09.861662474 +0000 UTC m=+144.990945086 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.371707 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d442" podStartSLOduration=119.371687481 podStartE2EDuration="1m59.371687481s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:09.340946451 +0000 UTC m=+144.470229063" watchObservedRunningTime="2026-01-27 18:08:09.371687481 +0000 UTC m=+144.500970093" Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.462225 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:09 crc kubenswrapper[4907]: E0127 18:08:09.462407 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:09.962375365 +0000 UTC m=+145.091657977 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.462849 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:09 crc kubenswrapper[4907]: E0127 18:08:09.463266 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:09.963248881 +0000 UTC m=+145.092531493 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.542219 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bjql6" podStartSLOduration=119.542192998 podStartE2EDuration="1m59.542192998s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:09.540413405 +0000 UTC m=+144.669696037" watchObservedRunningTime="2026-01-27 18:08:09.542192998 +0000 UTC m=+144.671475610" Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.591068 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:09 crc kubenswrapper[4907]: E0127 18:08:09.591581 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:10.091541119 +0000 UTC m=+145.220823731 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.693722 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:09 crc kubenswrapper[4907]: E0127 18:08:09.694152 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:10.194120315 +0000 UTC m=+145.323402927 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.719827 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" event={"ID":"831e1c4c-ecd4-4617-ab4a-37acc328a062","Type":"ContainerStarted","Data":"9c23b06384c8796b65db167f2ddf6d24f041ca9332650ec970c8918cbbd96aa7"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.733250 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" event={"ID":"46368914-416a-4849-9652-9c3ddae03429","Type":"ContainerStarted","Data":"7394209dadc4db9fff250a093b11268de96228180a44ce714a4ee786da97c7d7"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.743110 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" event={"ID":"486be3bf-a27f-4a44-97f3-751b782bee1f","Type":"ContainerStarted","Data":"45e9f53b4855567813ba17c6a76068635f9f7993467db5063abdc60aaa9036cd"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.745069 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" event={"ID":"434d6d34-127a-4de6-8f5c-6ea67008f70a","Type":"ContainerStarted","Data":"356df355c8127a88218540166db9f749a519a94b16bf1b196497ca93d12953da"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.745095 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" event={"ID":"434d6d34-127a-4de6-8f5c-6ea67008f70a","Type":"ContainerStarted","Data":"c4e6fa1620ba5175973c824748e1401fdecac7f0e0317ccf17b38b04dcd9c542"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.745937 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.753738 4907 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-85nxl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.753795 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" podUID="434d6d34-127a-4de6-8f5c-6ea67008f70a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.782230 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" event={"ID":"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf","Type":"ContainerStarted","Data":"048e1dff87736aab4fe9377cb7178695cb1717c58a7071500e8672995ab0ccd7"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.782270 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bvqd5" event={"ID":"f19550a4-d60c-4d8b-ae24-8b43c7b83736","Type":"ContainerStarted","Data":"1be134f3bfbb3dc60a4950829058b394c29cdc79afc1b884fd9097939196b1bc"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.801342 4907 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" podStartSLOduration=118.801319868 podStartE2EDuration="1m58.801319868s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:09.801193535 +0000 UTC m=+144.930476147" watchObservedRunningTime="2026-01-27 18:08:09.801319868 +0000 UTC m=+144.930602480" Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.801753 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:09 crc kubenswrapper[4907]: E0127 18:08:09.801819 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:10.301805203 +0000 UTC m=+145.431087815 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.811412 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:09 crc kubenswrapper[4907]: E0127 18:08:09.812111 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:10.312092767 +0000 UTC m=+145.441375379 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.826447 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-bvqd5" podStartSLOduration=5.8264235509999995 podStartE2EDuration="5.826423551s" podCreationTimestamp="2026-01-27 18:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:09.823510295 +0000 UTC m=+144.952792897" watchObservedRunningTime="2026-01-27 18:08:09.826423551 +0000 UTC m=+144.955706163" Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.848728 4907 csr.go:261] certificate signing request csr-bg9tz is approved, waiting to be issued Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.849815 4907 csr.go:257] certificate signing request csr-bg9tz is issued Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.858731 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8" event={"ID":"8ca4e1e6-2eaa-436c-a083-0d33fe87c756","Type":"ContainerStarted","Data":"a0a5c68d34aa2349d3f473e6ebcae9439f2dc066fca2284c2289910b04a0d052"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.865682 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 
18:08:09 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld Jan 27 18:08:09 crc kubenswrapper[4907]: [+]process-running ok Jan 27 18:08:09 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.865788 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.892038 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" event={"ID":"b66d56fc-163d-469a-8a47-a3e1462b1af8","Type":"ContainerStarted","Data":"368ea393733f1a4c1077c38caa10663b2ce22f7cddeb3632d1b4728e87864a5c"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.894067 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" event={"ID":"3d3d480c-01ea-4ec4-b238-16e70bb9caff","Type":"ContainerStarted","Data":"8a4373e23021954161d7f4a727d5a43d9b095b9295620195b1203d502f066d42"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.904999 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" event={"ID":"981b1dce-6375-4c49-9b16-144c98fc886c","Type":"ContainerStarted","Data":"e8792302a3e200cd9d7cc033f9b5e511edcc72f5c3187b6dd09723e3bf83589f"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.915840 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.916263 4907 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-747jk" podStartSLOduration=119.91624621 podStartE2EDuration="1m59.91624621s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:09.915270301 +0000 UTC m=+145.044552903" watchObservedRunningTime="2026-01-27 18:08:09.91624621 +0000 UTC m=+145.045528822" Jan 27 18:08:09 crc kubenswrapper[4907]: E0127 18:08:09.917379 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:10.417355063 +0000 UTC m=+145.546637675 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.918057 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" event={"ID":"ea5acd47-9e68-4600-beff-4ad9454dde7a","Type":"ContainerStarted","Data":"3de40f5725b574d578f5e60d79903db4c39e57ae08d9edf33ba1a29ffab5a484"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.937693 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" 
event={"ID":"98f518f9-4f3f-45f1-80f4-b50d4eb03135","Type":"ContainerStarted","Data":"5ced0bde139ab2fd5b1681b92757a3dcb9a399cb22cb1c3725107cf2b29c751c"} Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.939837 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.941203 4907 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7mcmq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.941241 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" podUID="98f518f9-4f3f-45f1-80f4-b50d4eb03135" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 27 18:08:09 crc kubenswrapper[4907]: I0127 18:08:09.943252 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" event={"ID":"decaba3c-d32c-4a1d-b413-52c195883560","Type":"ContainerStarted","Data":"8112c09c6fd4a12759a13e8be41678b9498286bbd7122b30c9adada2bcb74e23"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.009720 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xzht6" podStartSLOduration=119.009697987 podStartE2EDuration="1m59.009697987s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:09.973128124 +0000 UTC 
m=+145.102410736" watchObservedRunningTime="2026-01-27 18:08:10.009697987 +0000 UTC m=+145.138980599" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.011861 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-flpjm" podStartSLOduration=119.01185388 podStartE2EDuration="1m59.01185388s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.005212224 +0000 UTC m=+145.134494836" watchObservedRunningTime="2026-01-27 18:08:10.01185388 +0000 UTC m=+145.141136492" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.013315 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" event={"ID":"1fb72397-1fbe-4f9d-976a-19ca15b2da2c","Type":"ContainerStarted","Data":"751f6790eddcfff181547cb7090e8c80fd9fdf4c4aa3c45b341c6ab12bb2cee7"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.014669 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.017018 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.020246 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 18:08:10.520230218 +0000 UTC m=+145.649512830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.026197 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" podStartSLOduration=119.026177814 podStartE2EDuration="1m59.026177814s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.025620558 +0000 UTC m=+145.154903180" watchObservedRunningTime="2026-01-27 18:08:10.026177814 +0000 UTC m=+145.155460426" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.029571 4907 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pn59x container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.029617 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" podUID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.030922 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" event={"ID":"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d","Type":"ContainerStarted","Data":"18d7be0786a62aab310d598b8700b9ae5d4126f078caf285754349a4454e4105"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.031807 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.033100 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.033125 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.044940 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l59wn" event={"ID":"5f465d65-342c-410f-9374-d8c5ac6f03e0","Type":"ContainerStarted","Data":"ba73a6f2772be08ccc40da6054d654e0f4612ea8c21539a6ad8af140a245776b"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.061262 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" event={"ID":"daf23b11-96d6-4c77-8145-b7928844bd5e","Type":"ContainerStarted","Data":"6736e245c631ab752552155875f3d8bed66efeb20071accd516574ec3c53c9da"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.072479 4907 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" podStartSLOduration=119.072459164 podStartE2EDuration="1m59.072459164s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.050327759 +0000 UTC m=+145.179610381" watchObservedRunningTime="2026-01-27 18:08:10.072459164 +0000 UTC m=+145.201741776" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.073621 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" podStartSLOduration=119.073613858 podStartE2EDuration="1m59.073613858s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.071412683 +0000 UTC m=+145.200695325" watchObservedRunningTime="2026-01-27 18:08:10.073613858 +0000 UTC m=+145.202896470" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.082806 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" event={"ID":"b3e7e0e7-2f37-4998-af7c-6e5d373a1264","Type":"ContainerStarted","Data":"6552446c5b4b19439a108a98e88ffb461a2c1e3e996f35ac7a40da27c783fd05"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.092826 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" event={"ID":"d0ed825a-5a7b-454e-80f7-5cfa3d459032","Type":"ContainerStarted","Data":"27aeacd9ac0b4134e47ebe6a06655e85d25b03cdfff3285ca2f5ff0cb845e200"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.100024 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9kf4x" podStartSLOduration=120.10000184 podStartE2EDuration="2m0.10000184s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.099191616 +0000 UTC m=+145.228474228" watchObservedRunningTime="2026-01-27 18:08:10.10000184 +0000 UTC m=+145.229284462" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.104073 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" event={"ID":"fbf6d5b6-a4d1-4c8b-a111-9802cec24aab","Type":"ContainerStarted","Data":"97f692e300917600042d52f72fab797a3d8d5bc1cca3c2f563b92cdddd9ea24d"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.115475 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj" event={"ID":"42d77196-c327-47c3-8713-d23038a08e13","Type":"ContainerStarted","Data":"b66117fad3c7b38c73cadb10b3ac15033d3b42ac68331268ac63456de3c6b9ae"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.120699 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.121937 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:10.621913288 +0000 UTC m=+145.751195900 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.126918 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podStartSLOduration=119.126904276 podStartE2EDuration="1m59.126904276s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.122145535 +0000 UTC m=+145.251428167" watchObservedRunningTime="2026-01-27 18:08:10.126904276 +0000 UTC m=+145.256186888" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.150404 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4tcrf" event={"ID":"6ce158a5-7aba-4844-97ef-733b55d1694e","Type":"ContainerStarted","Data":"c3037282fd2c948713bcb1fb5de0e6880b822524d1ef4ee62860d63dcc553e41"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.152173 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" event={"ID":"4479b1ff-dfc5-4b7e-9b25-8472bcd58f56","Type":"ContainerStarted","Data":"271a93d46916e2598f357502f32c17324daeb6631a24e3788b69844dfac7c454"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.153702 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9" 
event={"ID":"d9ccc9d3-faa6-4c00-830b-2e1549a6725d","Type":"ContainerStarted","Data":"f7bbcb0ec3067846d3bed3cdc9a8fde003e4cd4d011cd3657e3f646ac9b6876e"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.156063 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f" event={"ID":"6781da2d-2096-43fc-857d-d46734c50e16","Type":"ContainerStarted","Data":"61ec2e1254000a48ca140c481bf406a9161e0e16da9e86559031e5ebc467417c"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.162839 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bz48v" podStartSLOduration=120.162815359 podStartE2EDuration="2m0.162815359s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.162009735 +0000 UTC m=+145.291292347" watchObservedRunningTime="2026-01-27 18:08:10.162815359 +0000 UTC m=+145.292097971" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.169512 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42" event={"ID":"f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f","Type":"ContainerStarted","Data":"72dfded737a3b5402054537e151e18c454369edd7634d4cdbb665a1aec580b1d"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.183434 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" event={"ID":"d667690f-b387-424c-b130-e50277eaa0c4","Type":"ContainerStarted","Data":"933c2dd6f7e717ec6abace0a5e717c50f793a554ccfd3b64e57446afeb7ccc72"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.186305 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.187345 4907 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lfqhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.187381 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" podUID="d667690f-b387-424c-b130-e50277eaa0c4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.193955 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" event={"ID":"178c40d2-9468-43b5-b33b-f95b60268091","Type":"ContainerStarted","Data":"a4ca03c4c2426d4159d8f34134cadb220757b80f28156a08285f10c201bce70b"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.200463 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qtfgw" event={"ID":"5997de10-6cbe-4099-aa7f-4f50effd0c4e","Type":"ContainerStarted","Data":"2a44aeb685fac8a49398cd4e329c21e65faaa03809aeff212999e12ce304a1f0"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.214976 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-znwrp" podStartSLOduration=119.214957903 podStartE2EDuration="1m59.214957903s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.185344676 
+0000 UTC m=+145.314627288" watchObservedRunningTime="2026-01-27 18:08:10.214957903 +0000 UTC m=+145.344240515" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.216915 4907 generic.go:334] "Generic (PLEG): container finished" podID="38363947-4768-44b8-b3fe-f7b5b482da55" containerID="61f8aa33026aa73cee0be39f4ed156775e988a1bf641f3de35f9c4e6b9f36e7d" exitCode=0 Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.220038 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" event={"ID":"38363947-4768-44b8-b3fe-f7b5b482da55","Type":"ContainerDied","Data":"61f8aa33026aa73cee0be39f4ed156775e988a1bf641f3de35f9c4e6b9f36e7d"} Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.224440 4907 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9j78b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.224509 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" podUID="248aff8a-60f5-4154-a7bb-2dd95e4b2555" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.224800 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.225083 4907 
patch_prober.go:28] interesting pod/console-operator-58897d9998-bjfcf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.225167 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" podUID="1c678cbb-a03d-4ed8-85bd-befc2884454e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.225403 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.226304 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:10.726290058 +0000 UTC m=+145.855572670 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.245888 4907 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-lg6ln container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body= Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.245942 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" podUID="df82c5b4-85d6-4b74-85f5-46d598058d2d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.246051 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.246171 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.249230 4907 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7v8cj" podStartSLOduration=119.249212256 podStartE2EDuration="1m59.249212256s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.212874111 +0000 UTC m=+145.342156723" watchObservedRunningTime="2026-01-27 18:08:10.249212256 +0000 UTC m=+145.378494868" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.277766 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" podStartSLOduration=119.277741391 podStartE2EDuration="1m59.277741391s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.2760171 +0000 UTC m=+145.405299712" watchObservedRunningTime="2026-01-27 18:08:10.277741391 +0000 UTC m=+145.407024003" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.279708 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2gtn" podStartSLOduration=120.279697449 podStartE2EDuration="2m0.279697449s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.250664249 +0000 UTC m=+145.379946861" watchObservedRunningTime="2026-01-27 18:08:10.279697449 +0000 UTC m=+145.408980071" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.308241 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j9xmt" podStartSLOduration=120.308217303 
podStartE2EDuration="2m0.308217303s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.30170968 +0000 UTC m=+145.430992292" watchObservedRunningTime="2026-01-27 18:08:10.308217303 +0000 UTC m=+145.437499915" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.329014 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.330443 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:10.83042336 +0000 UTC m=+145.959705972 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.364620 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-qtfgw" podStartSLOduration=6.36319128 podStartE2EDuration="6.36319128s" podCreationTimestamp="2026-01-27 18:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.327931807 +0000 UTC m=+145.457214409" watchObservedRunningTime="2026-01-27 18:08:10.36319128 +0000 UTC m=+145.492473892" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.379585 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42" podStartSLOduration=120.379542294 podStartE2EDuration="2m0.379542294s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.364951192 +0000 UTC m=+145.494233804" watchObservedRunningTime="2026-01-27 18:08:10.379542294 +0000 UTC m=+145.508824906" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.430987 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: 
\"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.431420 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:10.93140233 +0000 UTC m=+146.060684942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.443486 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" podStartSLOduration=120.443466887 podStartE2EDuration="2m0.443466887s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.42095725 +0000 UTC m=+145.550239862" watchObservedRunningTime="2026-01-27 18:08:10.443466887 +0000 UTC m=+145.572749499" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.492435 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-grwdr" podStartSLOduration=120.492409075 podStartE2EDuration="2m0.492409075s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:10.446117635 +0000 UTC 
m=+145.575400257" watchObservedRunningTime="2026-01-27 18:08:10.492409075 +0000 UTC m=+145.621691697" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.532324 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.532748 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.032712588 +0000 UTC m=+146.161995210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.532889 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.533401 4907 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.033371908 +0000 UTC m=+146.162654520 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.634619 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.634733 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.134711878 +0000 UTC m=+146.263994490 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.634902 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.635214 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.135207552 +0000 UTC m=+146.264490164 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.736772 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.736974 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.236933784 +0000 UTC m=+146.366216396 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.737199 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.737483 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.23747105 +0000 UTC m=+146.366753662 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.837932 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.838461 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.338442709 +0000 UTC m=+146.467725311 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.850855 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 18:03:09 +0000 UTC, rotation deadline is 2026-10-22 23:57:43.249100769 +0000 UTC Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.850934 4907 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6437h49m32.398170141s for next certificate rotation Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.854511 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:08:10 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld Jan 27 18:08:10 crc kubenswrapper[4907]: [+]process-running ok Jan 27 18:08:10 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.854602 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:10 crc kubenswrapper[4907]: I0127 18:08:10.940202 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:10 crc kubenswrapper[4907]: E0127 18:08:10.940643 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.440625463 +0000 UTC m=+146.569908075 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.041619 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.041832 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.541800018 +0000 UTC m=+146.671082630 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.041939 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.042346 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.542331234 +0000 UTC m=+146.671613846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.143806 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.144011 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.643979973 +0000 UTC m=+146.773262585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.144115 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.144577 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.644545629 +0000 UTC m=+146.773828241 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.223204 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" event={"ID":"ea5acd47-9e68-4600-beff-4ad9454dde7a","Type":"ContainerStarted","Data":"aed3f87f05b661174fc31cac2eb6f54fc32677dac4ae688ee496b5c8e8e7ce13"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.223250 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" event={"ID":"ea5acd47-9e68-4600-beff-4ad9454dde7a","Type":"ContainerStarted","Data":"25e833908f081118d717b03f2b19f8aaf225ea8ed86c0ee59090108622487371"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.226006 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" event={"ID":"38363947-4768-44b8-b3fe-f7b5b482da55","Type":"ContainerStarted","Data":"8a47adad5e917e30457b41fed58f3ef8e65c07845f3812ee84f0a45d505023e6"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.226456 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.227972 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6sp42" 
event={"ID":"f7f7dadf-dfb5-4370-9a56-5d1cde7cc77f","Type":"ContainerStarted","Data":"04e4e680c43122d51096c187a384cfc29fedf39b745cda95753364e4b1496a2a"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.230666 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f" event={"ID":"6781da2d-2096-43fc-857d-d46734c50e16","Type":"ContainerStarted","Data":"506b31ee155e1bb451634dee66e8943aa67e6462728cf44d654c8d646b0ca2f3"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.230724 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f" event={"ID":"6781da2d-2096-43fc-857d-d46734c50e16","Type":"ContainerStarted","Data":"e2f891df33a718082dedb90fed0acc1f20496b6a841938f2b4fbabf5817a9341"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.232228 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qtfgw" event={"ID":"5997de10-6cbe-4099-aa7f-4f50effd0c4e","Type":"ContainerStarted","Data":"70200604599a9563eaa08a7723ed1afc4736073249499a7c0657bb946942275b"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.234100 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" event={"ID":"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d","Type":"ContainerStarted","Data":"39c2b04a084c1f73feb7fedf35c2685fc18b62e0104e4a5612d7a513b08ecfe8"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.235154 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.235191 4907 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.236062 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8" event={"ID":"8ca4e1e6-2eaa-436c-a083-0d33fe87c756","Type":"ContainerStarted","Data":"f2666cfd77e1b691a60e5ec9529ca2d92988056a8afe19f485b55f4341b8afca"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.236109 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8" event={"ID":"8ca4e1e6-2eaa-436c-a083-0d33fe87c756","Type":"ContainerStarted","Data":"4b7a80dc8ce421404587ea370c5379fd09f8a7f8a2e8aa219a6df7b1d653d4ea"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.237650 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" event={"ID":"831e1c4c-ecd4-4617-ab4a-37acc328a062","Type":"ContainerStarted","Data":"d449f31194a913b1848bfc1a9395fea50eba99ab526b5e1290f17d71441602c7"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.237678 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" event={"ID":"831e1c4c-ecd4-4617-ab4a-37acc328a062","Type":"ContainerStarted","Data":"4708299511e8a5d9f150554b1504c61d714ad11cc142f42df40448eb5a48c33f"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.239475 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" event={"ID":"981b1dce-6375-4c49-9b16-144c98fc886c","Type":"ContainerStarted","Data":"a8a7e68a1e2f50bdf7f17e555af8e56e9db1199f2e494c3ddbedfc61cd93cc03"} Jan 27 
18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.241088 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9" event={"ID":"d9ccc9d3-faa6-4c00-830b-2e1549a6725d","Type":"ContainerStarted","Data":"61bd66d5cc2d8b65e7e6fa0c72b92eb278365f98ac17f4debeb4cf95efa59393"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.243069 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" event={"ID":"1fb72397-1fbe-4f9d-976a-19ca15b2da2c","Type":"ContainerStarted","Data":"a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.243635 4907 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pn59x container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.243690 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" podUID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.244850 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.245003 4907 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.744981103 +0000 UTC m=+146.874263715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.245161 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.245451 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.745440696 +0000 UTC m=+146.874723308 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.245949 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" event={"ID":"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf","Type":"ContainerStarted","Data":"52e479a89219f19ceb319c0a0b04b0a15c0dea8abf0cf5c2205e3f54c150fd79"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.248760 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" event={"ID":"db7629bc-e5a1-44e1-9af4-ecc83acfda75","Type":"ContainerStarted","Data":"02b070fc56aba7b64f6d5a1e5f16be5a2e96728797d79c1df27f0c3466b01b80"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.248802 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" event={"ID":"db7629bc-e5a1-44e1-9af4-ecc83acfda75","Type":"ContainerStarted","Data":"bf4a4256435b0fd4a3c356ecbc3ebd238ce11e688210a2f920cf617bb8cf34c0"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.250701 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4tcrf" event={"ID":"6ce158a5-7aba-4844-97ef-733b55d1694e","Type":"ContainerStarted","Data":"38b4eed8264775bbbf3e8d3d6a05bea19d2a64c42a151e40f70f518d125e2b0f"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.251160 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-4tcrf" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.253436 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" event={"ID":"46368914-416a-4849-9652-9c3ddae03429","Type":"ContainerStarted","Data":"5960d5c470a01bc353a08f53d7146c8b2eff8357abebb583a1c08e8d5e2efeb9"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.255548 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" event={"ID":"486be3bf-a27f-4a44-97f3-751b782bee1f","Type":"ContainerStarted","Data":"444f3c824c48a375f8581dcc1eb8f5a798f2d0be5ed7bd3ff86803a8f65034ee"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.255586 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" event={"ID":"486be3bf-a27f-4a44-97f3-751b782bee1f","Type":"ContainerStarted","Data":"1b38c60f52621140e0c9a91375769eb94c0aab0298a05cb5008c3033b682f381"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.255835 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.258887 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" event={"ID":"9f254819-bf2c-4c38-881f-8d12a0d56278","Type":"ContainerStarted","Data":"4983d966fe225895264f26786e87f61b467c9409a1bd1deac4339cca6b1e1108"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.260803 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" event={"ID":"d0ed825a-5a7b-454e-80f7-5cfa3d459032","Type":"ContainerStarted","Data":"7a6095c566ba193c683ad99a96c93380ccb46307a1a30611da0a364ff4f7a25c"} Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.262034 4907 patch_prober.go:28] interesting 
pod/catalog-operator-68c6474976-85nxl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.262090 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" podUID="434d6d34-127a-4de6-8f5c-6ea67008f70a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.265750 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x42x8" podStartSLOduration=121.265736657 podStartE2EDuration="2m1.265736657s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.263546982 +0000 UTC m=+146.392829594" watchObservedRunningTime="2026-01-27 18:08:11.265736657 +0000 UTC m=+146.395019269" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.269981 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.280381 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.329808 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85kkw" podStartSLOduration=121.329774523 podStartE2EDuration="2m1.329774523s" 
podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.32835427 +0000 UTC m=+146.457636882" watchObservedRunningTime="2026-01-27 18:08:11.329774523 +0000 UTC m=+146.459057135" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.331230 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wwwc7" podStartSLOduration=120.331224285 podStartE2EDuration="2m0.331224285s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.303193826 +0000 UTC m=+146.432476438" watchObservedRunningTime="2026-01-27 18:08:11.331224285 +0000 UTC m=+146.460506897" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.349068 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.349237 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.849198468 +0000 UTC m=+146.978481080 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.349886 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.353772 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.853761373 +0000 UTC m=+146.983043975 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.399830 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rv75f" podStartSLOduration=120.399801785 podStartE2EDuration="2m0.399801785s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.376743693 +0000 UTC m=+146.506026305" watchObservedRunningTime="2026-01-27 18:08:11.399801785 +0000 UTC m=+146.529084387" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.419088 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-4tcrf" podStartSLOduration=7.419061226 podStartE2EDuration="7.419061226s" podCreationTimestamp="2026-01-27 18:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.401066703 +0000 UTC m=+146.530349315" watchObservedRunningTime="2026-01-27 18:08:11.419061226 +0000 UTC m=+146.548343838" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.449872 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-q8qbc" podStartSLOduration=121.449855557 podStartE2EDuration="2m1.449855557s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.448230349 +0000 UTC m=+146.577512961" watchObservedRunningTime="2026-01-27 18:08:11.449855557 +0000 UTC m=+146.579138169" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.452452 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.453158 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:11.953115324 +0000 UTC m=+147.082397936 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.473950 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" podStartSLOduration=120.47392961 podStartE2EDuration="2m0.47392961s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.472182908 +0000 UTC m=+146.601465520" watchObservedRunningTime="2026-01-27 18:08:11.47392961 +0000 UTC m=+146.603212232" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.503324 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" podStartSLOduration=121.503306339 podStartE2EDuration="2m1.503306339s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.501001321 +0000 UTC m=+146.630283943" watchObservedRunningTime="2026-01-27 18:08:11.503306339 +0000 UTC m=+146.632588951" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.531979 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-5z9d9" podStartSLOduration=121.531955957 podStartE2EDuration="2m1.531955957s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.53102941 +0000 UTC m=+146.660312022" watchObservedRunningTime="2026-01-27 18:08:11.531955957 +0000 UTC m=+146.661238569" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.555123 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.555545 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.055526345 +0000 UTC m=+147.184808957 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.558480 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-6qsv8" podStartSLOduration=120.558456702 podStartE2EDuration="2m0.558456702s" podCreationTimestamp="2026-01-27 18:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.557243806 +0000 UTC m=+146.686526418" watchObservedRunningTime="2026-01-27 18:08:11.558456702 +0000 UTC m=+146.687739314" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.644469 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" podStartSLOduration=121.644442736 podStartE2EDuration="2m1.644442736s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.604926277 +0000 UTC m=+146.734208889" watchObservedRunningTime="2026-01-27 18:08:11.644442736 +0000 UTC m=+146.773725348" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.644814 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-l4hv6" podStartSLOduration=121.644809997 podStartE2EDuration="2m1.644809997s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.64289638 +0000 UTC m=+146.772178992" watchObservedRunningTime="2026-01-27 18:08:11.644809997 +0000 UTC m=+146.774092609" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.657104 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.657366 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.157327178 +0000 UTC m=+147.286609790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.657537 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.658037 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.158002118 +0000 UTC m=+147.287284730 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.669596 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.669678 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.669765 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.669810 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.672165 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" podStartSLOduration=121.672154167 podStartE2EDuration="2m1.672154167s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:11.671218309 +0000 UTC m=+146.800500921" watchObservedRunningTime="2026-01-27 18:08:11.672154167 +0000 UTC m=+146.801436779" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.674069 4907 patch_prober.go:28] interesting pod/apiserver-76f77b778f-8ljpb container/openshift-apiserver 
namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.674130 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" podUID="db7629bc-e5a1-44e1-9af4-ecc83acfda75" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.675118 4907 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-xld9m container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.675157 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" podUID="9f254819-bf2c-4c38-881f-8d12a0d56278" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.759053 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.759346 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 18:08:12.259304956 +0000 UTC m=+147.388587568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.759438 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.759783 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.25976808 +0000 UTC m=+147.389050682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.854758 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:08:11 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld Jan 27 18:08:11 crc kubenswrapper[4907]: [+]process-running ok Jan 27 18:08:11 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.854884 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.860322 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.860654 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 18:08:12.360631936 +0000 UTC m=+147.489914548 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.911568 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:08:11 crc kubenswrapper[4907]: I0127 18:08:11.962221 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:11 crc kubenswrapper[4907]: E0127 18:08:11.962670 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.462655796 +0000 UTC m=+147.591938408 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.062948 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.063157 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.5631209 +0000 UTC m=+147.692403512 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.064031 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.064451 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.564439489 +0000 UTC m=+147.693722091 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.164742 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.165128 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.665109189 +0000 UTC m=+147.794391801 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.265857 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.266240 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.766223872 +0000 UTC m=+147.895506484 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.269871 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l59wn" event={"ID":"5f465d65-342c-410f-9374-d8c5ac6f03e0","Type":"ContainerStarted","Data":"31abee030aec864b0b7650934ff98ad10d37a1b2c79e40fe336210a1e258e2ff"} Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.271118 4907 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pn59x container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.271176 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" podUID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.367620 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.367863 4907 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.867827059 +0000 UTC m=+147.997109671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.368807 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.369985 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.869969073 +0000 UTC m=+147.999251685 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.470349 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.470701 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.970678284 +0000 UTC m=+148.099960896 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.470815 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.471241 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:12.97123269 +0000 UTC m=+148.100515302 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.572680 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.572825 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.072791347 +0000 UTC m=+148.202073959 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.573389 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.573864 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.073842418 +0000 UTC m=+148.203125030 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.674935 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.675359 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.175303791 +0000 UTC m=+148.304586413 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.776699 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.777247 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.277228108 +0000 UTC m=+148.406510720 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.853134 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:08:12 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld Jan 27 18:08:12 crc kubenswrapper[4907]: [+]process-running ok Jan 27 18:08:12 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.853225 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.877591 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.877950 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 18:08:13.377907448 +0000 UTC m=+148.507190200 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:12 crc kubenswrapper[4907]: I0127 18:08:12.979825 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:12 crc kubenswrapper[4907]: E0127 18:08:12.980371 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.480354451 +0000 UTC m=+148.609637063 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.081089 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.081453 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.581369791 +0000 UTC m=+148.710652403 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.081929 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.082542 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.582527315 +0000 UTC m=+148.711809928 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.183256 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.183390 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.683365141 +0000 UTC m=+148.812647753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.183788 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.184378 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.68434937 +0000 UTC m=+148.813632192 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.273750 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.273816 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.284136 4907 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-78q6j container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.284198 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" podUID="38363947-4768-44b8-b3fe-f7b5b482da55" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.284576 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.284775 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.784746441 +0000 UTC m=+148.914029043 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.284972 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.285334 4907 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.785325899 +0000 UTC m=+148.914608511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.368006 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mhc2f"] Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.369497 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.372344 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.385782 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.385975 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 18:08:13.885944457 +0000 UTC m=+149.015227069 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.386415 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.387542 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.887525704 +0000 UTC m=+149.016808316 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.409791 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mhc2f"] Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.488317 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.488762 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-utilities\") pod \"community-operators-mhc2f\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.488811 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-catalog-content\") pod \"community-operators-mhc2f\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.488837 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-s97lg\" (UniqueName: \"kubernetes.io/projected/7c7f1204-674f-4d4e-a695-28b2d0956b32-kube-api-access-s97lg\") pod \"community-operators-mhc2f\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.489021 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:13.989002098 +0000 UTC m=+149.118284710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.531066 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cg67x"] Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.532050 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.535562 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.577611 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cg67x"] Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590187 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590230 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mthtb\" (UniqueName: \"kubernetes.io/projected/7ee8faea-87ec-4620-b6a8-db398d35039a-kube-api-access-mthtb\") pod \"certified-operators-cg67x\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590259 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590284 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-utilities\") pod 
\"community-operators-mhc2f\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590302 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-catalog-content\") pod \"community-operators-mhc2f\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590323 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s97lg\" (UniqueName: \"kubernetes.io/projected/7c7f1204-674f-4d4e-a695-28b2d0956b32-kube-api-access-s97lg\") pod \"community-operators-mhc2f\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590387 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590412 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590438 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-utilities\") pod \"certified-operators-cg67x\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590475 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-catalog-content\") pod \"certified-operators-cg67x\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590501 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.590724 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:14.090700528 +0000 UTC m=+149.219983210 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590766 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-utilities\") pod \"community-operators-mhc2f\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.590911 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-catalog-content\") pod \"community-operators-mhc2f\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.597626 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.598340 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.598405 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.598431 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.676176 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s97lg\" (UniqueName: \"kubernetes.io/projected/7c7f1204-674f-4d4e-a695-28b2d0956b32-kube-api-access-s97lg\") pod \"community-operators-mhc2f\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.686087 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.693287 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.693499 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-utilities\") pod \"certified-operators-cg67x\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.693550 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-catalog-content\") pod \"certified-operators-cg67x\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.693629 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mthtb\" (UniqueName: \"kubernetes.io/projected/7ee8faea-87ec-4620-b6a8-db398d35039a-kube-api-access-mthtb\") pod \"certified-operators-cg67x\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.694079 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 18:08:14.194058668 +0000 UTC m=+149.323341280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.694447 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-utilities\") pod \"certified-operators-cg67x\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.694998 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-catalog-content\") pod \"certified-operators-cg67x\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.749401 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mthtb\" (UniqueName: \"kubernetes.io/projected/7ee8faea-87ec-4620-b6a8-db398d35039a-kube-api-access-mthtb\") pod \"certified-operators-cg67x\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.766102 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pnt7r"] Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.767263 4907 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.771147 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pnt7r"] Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.776763 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.781460 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.796407 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.797051 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.797524 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:14.2975107 +0000 UTC m=+149.426793312 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.856525 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.861722 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:08:13 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld Jan 27 18:08:13 crc kubenswrapper[4907]: [+]process-running ok Jan 27 18:08:13 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.861766 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.898097 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.898319 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-utilities\") pod \"community-operators-pnt7r\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.898365 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-catalog-content\") pod \"community-operators-pnt7r\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.898383 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdt5q\" (UniqueName: \"kubernetes.io/projected/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-kube-api-access-gdt5q\") pod \"community-operators-pnt7r\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:08:13 crc kubenswrapper[4907]: E0127 18:08:13.898518 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:14.398496489 +0000 UTC m=+149.527779101 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.944535 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b7l4d"] Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.945702 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.963182 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b7l4d"] Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.999861 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dz68\" (UniqueName: \"kubernetes.io/projected/f317b8ef-4875-4f24-8926-8efd5826a51e-kube-api-access-9dz68\") pod \"certified-operators-b7l4d\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:08:13 crc kubenswrapper[4907]: I0127 18:08:13.999916 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-catalog-content\") pod \"certified-operators-b7l4d\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.000028 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-utilities\") pod \"community-operators-pnt7r\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.000063 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.000100 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-utilities\") pod \"certified-operators-b7l4d\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.000124 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-catalog-content\") pod \"community-operators-pnt7r\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.000171 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdt5q\" (UniqueName: \"kubernetes.io/projected/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-kube-api-access-gdt5q\") pod \"community-operators-pnt7r\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:08:14 crc kubenswrapper[4907]: E0127 18:08:14.000915 4907 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:14.500897441 +0000 UTC m=+149.630180053 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.001020 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-catalog-content\") pod \"community-operators-pnt7r\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.001331 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-utilities\") pod \"community-operators-pnt7r\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.047472 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdt5q\" (UniqueName: \"kubernetes.io/projected/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-kube-api-access-gdt5q\") pod \"community-operators-pnt7r\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.104363 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.104668 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dz68\" (UniqueName: \"kubernetes.io/projected/f317b8ef-4875-4f24-8926-8efd5826a51e-kube-api-access-9dz68\") pod \"certified-operators-b7l4d\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.104708 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-catalog-content\") pod \"certified-operators-b7l4d\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.105311 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-catalog-content\") pod \"certified-operators-b7l4d\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:08:14 crc kubenswrapper[4907]: E0127 18:08:14.105449 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:14.605424535 +0000 UTC m=+149.734707147 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.105502 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.105542 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-utilities\") pod \"certified-operators-b7l4d\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.105911 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-utilities\") pod \"certified-operators-b7l4d\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:08:14 crc kubenswrapper[4907]: E0127 18:08:14.106549 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 18:08:14.606541148 +0000 UTC m=+149.735823760 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.143394 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.162290 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dz68\" (UniqueName: \"kubernetes.io/projected/f317b8ef-4875-4f24-8926-8efd5826a51e-kube-api-access-9dz68\") pod \"certified-operators-b7l4d\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.207236 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:14 crc kubenswrapper[4907]: E0127 18:08:14.207538 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:14.707517637 +0000 UTC m=+149.836800239 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.302171 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.310198 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:14 crc kubenswrapper[4907]: E0127 18:08:14.310530 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:14.810517366 +0000 UTC m=+149.939799978 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.380515 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l59wn" event={"ID":"5f465d65-342c-410f-9374-d8c5ac6f03e0","Type":"ContainerStarted","Data":"4350a3f312577284841a8f2a177a6a61f1a418d239294a056a2d101d359c1912"} Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.414335 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:14 crc kubenswrapper[4907]: E0127 18:08:14.414645 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:14.914627247 +0000 UTC m=+150.043909859 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.522281 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:14 crc kubenswrapper[4907]: E0127 18:08:14.522903 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:15.022888652 +0000 UTC m=+150.152171264 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.636968 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:14 crc kubenswrapper[4907]: E0127 18:08:14.637288 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:15.137269288 +0000 UTC m=+150.266551900 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.739926 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:14 crc kubenswrapper[4907]: E0127 18:08:14.740605 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:15.240592516 +0000 UTC m=+150.369875128 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.791391 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mhc2f"] Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.841275 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:14 crc kubenswrapper[4907]: E0127 18:08:14.841660 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:15.341639778 +0000 UTC m=+150.470922390 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.862820 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:08:14 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld Jan 27 18:08:14 crc kubenswrapper[4907]: [+]process-running ok Jan 27 18:08:14 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.862890 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.945322 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:14 crc kubenswrapper[4907]: E0127 18:08:14.947932 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 18:08:15.447917874 +0000 UTC m=+150.577200486 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:14 crc kubenswrapper[4907]: I0127 18:08:14.958040 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cg67x"] Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.052306 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:15 crc kubenswrapper[4907]: E0127 18:08:15.052710 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:15.552687235 +0000 UTC m=+150.681969847 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.059991 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pnt7r"] Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.153960 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:15 crc kubenswrapper[4907]: E0127 18:08:15.154353 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:15.654334394 +0000 UTC m=+150.783616996 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.174081 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b7l4d"] Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.237538 4907 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.256917 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:15 crc kubenswrapper[4907]: E0127 18:08:15.257779 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:15.757758075 +0000 UTC m=+150.887040687 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.313188 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-klwtz"] Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.314157 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.322531 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.338898 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-klwtz"] Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.358475 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:15 crc kubenswrapper[4907]: E0127 18:08:15.358906 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:15.858886048 +0000 UTC m=+150.988168660 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.390318 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1f9660fac977eeb3fd0684c3b80326db6baf83e4a30ff5d1a5b7689f5055ecd3"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.390377 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e5bdf425241c511c1277d134a95cf0283b70b6d80213a155057fb3a3199e42bf"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.390655 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.391083 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7l4d" event={"ID":"f317b8ef-4875-4f24-8926-8efd5826a51e","Type":"ContainerStarted","Data":"381af3184b48628759e0e418b748e32d55bc4e48955c79f0bca42f10d1b84973"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.393599 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2f115eecb9745fb2732177f347a753c0aacf9bb77b615bc1b7a84858c390e9ff"} Jan 27 
18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.393633 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"777607e0327b4a1fd82b38b0638fe07f446796e4a20a050acce461f335a96b4b"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.395361 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9c3eeb2bc4e6c3388483c1a27d99aa60ac152eab3a7f905736d0bd08c7b87a40"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.395393 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"6662102d52a1ba43e602baddaa718a19dc1e21baabef154413168f5ad25c85a1"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.397706 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l59wn" event={"ID":"5f465d65-342c-410f-9374-d8c5ac6f03e0","Type":"ContainerStarted","Data":"df5c19582e26f170aa0c1548da26d952491d3edd29cac3e74949a7d252495505"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.399624 4907 generic.go:334] "Generic (PLEG): container finished" podID="7c7f1204-674f-4d4e-a695-28b2d0956b32" containerID="28d683b73c516fd16050038427976efece8058f5945364982b68f8b23b72aba2" exitCode=0 Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.399697 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhc2f" event={"ID":"7c7f1204-674f-4d4e-a695-28b2d0956b32","Type":"ContainerDied","Data":"28d683b73c516fd16050038427976efece8058f5945364982b68f8b23b72aba2"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.399718 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhc2f" event={"ID":"7c7f1204-674f-4d4e-a695-28b2d0956b32","Type":"ContainerStarted","Data":"011be499d8b8d8d22772e72b71e952b3184b41de73c3cfac7cf3219b4b7d08b2"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.403278 4907 generic.go:334] "Generic (PLEG): container finished" podID="7ee8faea-87ec-4620-b6a8-db398d35039a" containerID="be6c8c2b32c82dd2e2cee12f93b1053ef5ddc94b250cbda98a5b387f916f54b6" exitCode=0 Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.403372 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cg67x" event={"ID":"7ee8faea-87ec-4620-b6a8-db398d35039a","Type":"ContainerDied","Data":"be6c8c2b32c82dd2e2cee12f93b1053ef5ddc94b250cbda98a5b387f916f54b6"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.403999 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cg67x" event={"ID":"7ee8faea-87ec-4620-b6a8-db398d35039a","Type":"ContainerStarted","Data":"327477b6362453b7f241bd4005f967f63bfcd92d60574b597325042d23e6ed02"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.403397 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.408008 4907 generic.go:334] "Generic (PLEG): container finished" podID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" containerID="f78417433897aedb0b02b3af7c2f2b881e06ca35f9f9655a9f750f3ff4783dfe" exitCode=0 Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.409901 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pnt7r" event={"ID":"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a","Type":"ContainerDied","Data":"f78417433897aedb0b02b3af7c2f2b881e06ca35f9f9655a9f750f3ff4783dfe"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.411864 4907 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-pnt7r" event={"ID":"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a","Type":"ContainerStarted","Data":"d3c6770c98ff6f027f6fcb7e8ca70d2af0a26ac35823e4ed05e57a35a4dcfa76"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.422535 4907 generic.go:334] "Generic (PLEG): container finished" podID="ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf" containerID="52e479a89219f19ceb319c0a0b04b0a15c0dea8abf0cf5c2205e3f54c150fd79" exitCode=0 Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.422605 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" event={"ID":"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf","Type":"ContainerDied","Data":"52e479a89219f19ceb319c0a0b04b0a15c0dea8abf0cf5c2205e3f54c150fd79"} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.459138 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.459423 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frltg\" (UniqueName: \"kubernetes.io/projected/dee6d631-48d1-4137-9736-c028fb27e655-kube-api-access-frltg\") pod \"redhat-marketplace-klwtz\" (UID: \"dee6d631-48d1-4137-9736-c028fb27e655\") " pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.459494 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-utilities\") pod \"redhat-marketplace-klwtz\" (UID: \"dee6d631-48d1-4137-9736-c028fb27e655\") " 
pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.459515 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-catalog-content\") pod \"redhat-marketplace-klwtz\" (UID: \"dee6d631-48d1-4137-9736-c028fb27e655\") " pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:08:15 crc kubenswrapper[4907]: E0127 18:08:15.459668 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:15.959650571 +0000 UTC m=+151.088933173 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.560682 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-utilities\") pod \"redhat-marketplace-klwtz\" (UID: \"dee6d631-48d1-4137-9736-c028fb27e655\") " pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.560720 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-catalog-content\") pod \"redhat-marketplace-klwtz\" (UID: 
\"dee6d631-48d1-4137-9736-c028fb27e655\") " pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.560798 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frltg\" (UniqueName: \"kubernetes.io/projected/dee6d631-48d1-4137-9736-c028fb27e655-kube-api-access-frltg\") pod \"redhat-marketplace-klwtz\" (UID: \"dee6d631-48d1-4137-9736-c028fb27e655\") " pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.560821 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:15 crc kubenswrapper[4907]: E0127 18:08:15.561109 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:16.061096794 +0000 UTC m=+151.190379406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.561659 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-utilities\") pod \"redhat-marketplace-klwtz\" (UID: \"dee6d631-48d1-4137-9736-c028fb27e655\") " pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.561910 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-catalog-content\") pod \"redhat-marketplace-klwtz\" (UID: \"dee6d631-48d1-4137-9736-c028fb27e655\") " pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.591605 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frltg\" (UniqueName: \"kubernetes.io/projected/dee6d631-48d1-4137-9736-c028fb27e655-kube-api-access-frltg\") pod \"redhat-marketplace-klwtz\" (UID: \"dee6d631-48d1-4137-9736-c028fb27e655\") " pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.661783 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:15 crc kubenswrapper[4907]: E0127 18:08:15.662852 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:16.162830115 +0000 UTC m=+151.292112727 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.711194 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-glcgf"] Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.712480 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.719314 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-glcgf"] Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.763834 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-catalog-content\") pod \"redhat-marketplace-glcgf\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") " pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.763921 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wsq6\" (UniqueName: \"kubernetes.io/projected/ed699310-2f9f-414f-ad04-7778af36ddb7-kube-api-access-8wsq6\") pod \"redhat-marketplace-glcgf\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") " pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.763950 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.763976 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-utilities\") pod \"redhat-marketplace-glcgf\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") " pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:08:15 crc kubenswrapper[4907]: E0127 
18:08:15.764344 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:08:16.26432755 +0000 UTC m=+151.393610162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wwg9f" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.820712 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.853722 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:08:15 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld Jan 27 18:08:15 crc kubenswrapper[4907]: [+]process-running ok Jan 27 18:08:15 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.853792 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.864623 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.864867 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wsq6\" (UniqueName: \"kubernetes.io/projected/ed699310-2f9f-414f-ad04-7778af36ddb7-kube-api-access-8wsq6\") pod \"redhat-marketplace-glcgf\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") " pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.864948 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-utilities\") pod \"redhat-marketplace-glcgf\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") " pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.864997 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-catalog-content\") pod \"redhat-marketplace-glcgf\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") " pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.865521 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-catalog-content\") pod \"redhat-marketplace-glcgf\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") " pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:08:15 crc kubenswrapper[4907]: E0127 18:08:15.865643 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:08:16.365622398 +0000 UTC m=+151.494905010 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.866106 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-utilities\") pod \"redhat-marketplace-glcgf\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") " pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.885261 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78q6j" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.894121 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wsq6\" (UniqueName: \"kubernetes.io/projected/ed699310-2f9f-414f-ad04-7778af36ddb7-kube-api-access-8wsq6\") pod \"redhat-marketplace-glcgf\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") " pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.940985 4907 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T18:08:15.237593018Z","Handler":null,"Name":""} Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.955990 4907 csi_plugin.go:100] 
kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.956048 4907 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.965760 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.982055 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:08:15 crc kubenswrapper[4907]: I0127 18:08:15.982100 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.018714 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wwg9f\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") " pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.044299 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.052679 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-klwtz"] Jan 27 18:08:16 crc kubenswrapper[4907]: W0127 18:08:16.064767 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddee6d631_48d1_4137_9736_c028fb27e655.slice/crio-9cd1ed5f538840fcd6ee0931fbe7a3c96a075f1d06cb90170d9ab15e3188d5a9 WatchSource:0}: Error finding container 9cd1ed5f538840fcd6ee0931fbe7a3c96a075f1d06cb90170d9ab15e3188d5a9: Status 404 returned error can't find the container with id 9cd1ed5f538840fcd6ee0931fbe7a3c96a075f1d06cb90170d9ab15e3188d5a9 Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.066336 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.072229 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.240707 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.252709 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-glcgf"] Jan 27 18:08:16 crc kubenswrapper[4907]: W0127 18:08:16.315594 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded699310_2f9f_414f_ad04_7778af36ddb7.slice/crio-6e4bfa4b2124a87f7a84ae1d0c9f804ceb0bde1aa683068bc3c70fac1b397adf WatchSource:0}: Error finding container 6e4bfa4b2124a87f7a84ae1d0c9f804ceb0bde1aa683068bc3c70fac1b397adf: Status 404 returned error can't find the container with id 6e4bfa4b2124a87f7a84ae1d0c9f804ceb0bde1aa683068bc3c70fac1b397adf Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.433290 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcgf" event={"ID":"ed699310-2f9f-414f-ad04-7778af36ddb7","Type":"ContainerStarted","Data":"6e4bfa4b2124a87f7a84ae1d0c9f804ceb0bde1aa683068bc3c70fac1b397adf"} Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.439679 4907 generic.go:334] "Generic (PLEG): container finished" podID="f317b8ef-4875-4f24-8926-8efd5826a51e" containerID="98b9129cbdf3200f9debd8d0083bd69eccf6a4e15f4ded82649a33a7b408262d" exitCode=0 Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.439747 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7l4d" event={"ID":"f317b8ef-4875-4f24-8926-8efd5826a51e","Type":"ContainerDied","Data":"98b9129cbdf3200f9debd8d0083bd69eccf6a4e15f4ded82649a33a7b408262d"} Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.500479 4907 generic.go:334] "Generic (PLEG): container finished" podID="dee6d631-48d1-4137-9736-c028fb27e655" containerID="dd24dd32da263b7052a82f6c2b680b2979832173d139168c7d6b2bbf5b442718" exitCode=0 Jan 27 18:08:16 crc kubenswrapper[4907]: 
I0127 18:08:16.500630 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-klwtz" event={"ID":"dee6d631-48d1-4137-9736-c028fb27e655","Type":"ContainerDied","Data":"dd24dd32da263b7052a82f6c2b680b2979832173d139168c7d6b2bbf5b442718"} Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.500731 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-klwtz" event={"ID":"dee6d631-48d1-4137-9736-c028fb27e655","Type":"ContainerStarted","Data":"9cd1ed5f538840fcd6ee0931fbe7a3c96a075f1d06cb90170d9ab15e3188d5a9"} Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.525056 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l59wn" event={"ID":"5f465d65-342c-410f-9374-d8c5ac6f03e0","Type":"ContainerStarted","Data":"b7b18eda64aebb23e9943e310d6bef774c0c9293904973ce32213bb07072d47f"} Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.573326 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-l59wn" podStartSLOduration=12.573284176 podStartE2EDuration="12.573284176s" podCreationTimestamp="2026-01-27 18:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:16.573275856 +0000 UTC m=+151.702558468" watchObservedRunningTime="2026-01-27 18:08:16.573284176 +0000 UTC m=+151.702566788" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.633622 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.633687 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server 
namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.633716 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.633737 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.638800 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.640386 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.643169 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.643408 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.646761 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.656188 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wwg9f"] Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.681053 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e033ab3-25a2-4b59-80a5-a9af38d07e93\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.681102 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e033ab3-25a2-4b59-80a5-a9af38d07e93\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.690323 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.692419 4907 patch_prober.go:28] interesting pod/apiserver-76f77b778f-8ljpb 
container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]log ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]etcd ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]poststarthook/generic-apiserver-start-informers ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]poststarthook/max-in-flight-filter ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 27 18:08:16 crc kubenswrapper[4907]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 27 18:08:16 crc kubenswrapper[4907]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]poststarthook/project.openshift.io-projectcache ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]poststarthook/openshift.io-startinformers ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 27 18:08:16 crc kubenswrapper[4907]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 27 18:08:16 crc kubenswrapper[4907]: livez check failed Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.692492 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" podUID="db7629bc-e5a1-44e1-9af4-ecc83acfda75" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.697239 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.726905 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jhwph"] Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.728866 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.739071 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jhwph"] Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.740466 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.745966 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 18:08:16 crc kubenswrapper[4907]: E0127 18:08:16.755363 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded699310_2f9f_414f_ad04_7778af36ddb7.slice/crio-conmon-41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded699310_2f9f_414f_ad04_7778af36ddb7.slice/crio-41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13.scope\": RecentStats: unable to find data in memory cache]" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.782748 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e033ab3-25a2-4b59-80a5-a9af38d07e93\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.782923 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-utilities\") pod \"redhat-operators-jhwph\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.782991 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-catalog-content\") pod \"redhat-operators-jhwph\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.783087 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9khpg\" (UniqueName: \"kubernetes.io/projected/1f9526ea-3ca9-4727-aadd-3103419511d9-kube-api-access-9khpg\") pod \"redhat-operators-jhwph\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.783181 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e033ab3-25a2-4b59-80a5-a9af38d07e93\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.783281 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"8e033ab3-25a2-4b59-80a5-a9af38d07e93\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.815651 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.815904 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.819246 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.822167 4907 patch_prober.go:28] interesting pod/console-f9d7485db-grwdr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.822223 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-grwdr" podUID="c40070fe-7a8d-4f73-ad68-7e0a36680906" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.853290 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.858889 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:08:16 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld Jan 27 18:08:16 crc kubenswrapper[4907]: 
[+]process-running ok Jan 27 18:08:16 crc kubenswrapper[4907]: healthz check failed Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.858959 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.871942 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e033ab3-25a2-4b59-80a5-a9af38d07e93\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.885676 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-catalog-content\") pod \"redhat-operators-jhwph\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.885764 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9khpg\" (UniqueName: \"kubernetes.io/projected/1f9526ea-3ca9-4727-aadd-3103419511d9-kube-api-access-9khpg\") pod \"redhat-operators-jhwph\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.885949 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-utilities\") pod \"redhat-operators-jhwph\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:08:16 crc 
kubenswrapper[4907]: I0127 18:08:16.886368 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-utilities\") pod \"redhat-operators-jhwph\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " pod="openshift-marketplace/redhat-operators-jhwph"
Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.887636 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-catalog-content\") pod \"redhat-operators-jhwph\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " pod="openshift-marketplace/redhat-operators-jhwph"
Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.944544 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9khpg\" (UniqueName: \"kubernetes.io/projected/1f9526ea-3ca9-4727-aadd-3103419511d9-kube-api-access-9khpg\") pod \"redhat-operators-jhwph\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " pod="openshift-marketplace/redhat-operators-jhwph"
Jan 27 18:08:16 crc kubenswrapper[4907]: I0127 18:08:16.998717 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.089204 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6wm4\" (UniqueName: \"kubernetes.io/projected/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-kube-api-access-l6wm4\") pod \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") "
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.089303 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-config-volume\") pod \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") "
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.089378 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-secret-volume\") pod \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\" (UID: \"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf\") "
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.098732 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-kube-api-access-l6wm4" (OuterVolumeSpecName: "kube-api-access-l6wm4") pod "ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf" (UID: "ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf"). InnerVolumeSpecName "kube-api-access-l6wm4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.099491 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf" (UID: "ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.103645 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-config-volume" (OuterVolumeSpecName: "config-volume") pod "ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf" (UID: "ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.115844 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.116710 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m4mfc"]
Jan 27 18:08:17 crc kubenswrapper[4907]: E0127 18:08:17.116988 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf" containerName="collect-profiles"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.117000 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf" containerName="collect-profiles"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.117120 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf" containerName="collect-profiles"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.117981 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m4mfc"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.135874 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m4mfc"]
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.157043 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jhwph"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.192021 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-catalog-content\") pod \"redhat-operators-m4mfc\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " pod="openshift-marketplace/redhat-operators-m4mfc"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.192086 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-utilities\") pod \"redhat-operators-m4mfc\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " pod="openshift-marketplace/redhat-operators-m4mfc"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.192109 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nth5x\" (UniqueName: \"kubernetes.io/projected/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-kube-api-access-nth5x\") pod \"redhat-operators-m4mfc\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " pod="openshift-marketplace/redhat-operators-m4mfc"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.192167 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6wm4\" (UniqueName: \"kubernetes.io/projected/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-kube-api-access-l6wm4\") on node \"crc\" DevicePath \"\""
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.192179 4907 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.192189 4907 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.295127 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-catalog-content\") pod \"redhat-operators-m4mfc\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " pod="openshift-marketplace/redhat-operators-m4mfc"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.295192 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-utilities\") pod \"redhat-operators-m4mfc\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " pod="openshift-marketplace/redhat-operators-m4mfc"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.295218 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nth5x\" (UniqueName: \"kubernetes.io/projected/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-kube-api-access-nth5x\") pod \"redhat-operators-m4mfc\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " pod="openshift-marketplace/redhat-operators-m4mfc"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.298951 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.299745 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.302629 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-catalog-content\") pod \"redhat-operators-m4mfc\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " pod="openshift-marketplace/redhat-operators-m4mfc"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.304702 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-utilities\") pod \"redhat-operators-m4mfc\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " pod="openshift-marketplace/redhat-operators-m4mfc"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.306062 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.306267 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.312680 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.344908 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nth5x\" (UniqueName: \"kubernetes.io/projected/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-kube-api-access-nth5x\") pod \"redhat-operators-m4mfc\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " pod="openshift-marketplace/redhat-operators-m4mfc"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.396114 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d614d08-52fd-42af-a4bd-b17d80303a0d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3d614d08-52fd-42af-a4bd-b17d80303a0d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.396166 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d614d08-52fd-42af-a4bd-b17d80303a0d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3d614d08-52fd-42af-a4bd-b17d80303a0d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.441082 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jhwph"]
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.447868 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m4mfc"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.498346 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d614d08-52fd-42af-a4bd-b17d80303a0d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3d614d08-52fd-42af-a4bd-b17d80303a0d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.498400 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d614d08-52fd-42af-a4bd-b17d80303a0d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3d614d08-52fd-42af-a4bd-b17d80303a0d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.498467 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d614d08-52fd-42af-a4bd-b17d80303a0d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3d614d08-52fd-42af-a4bd-b17d80303a0d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.511487 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 27 18:08:17 crc kubenswrapper[4907]: W0127 18:08:17.516211 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f9526ea_3ca9_4727_aadd_3103419511d9.slice/crio-a25f71c0e1b8e215c2c97229db7543cf578b69337a997116cde1864efa87346a WatchSource:0}: Error finding container a25f71c0e1b8e215c2c97229db7543cf578b69337a997116cde1864efa87346a: Status 404 returned error can't find the container with id a25f71c0e1b8e215c2c97229db7543cf578b69337a997116cde1864efa87346a
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.522346 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d614d08-52fd-42af-a4bd-b17d80303a0d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3d614d08-52fd-42af-a4bd-b17d80303a0d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.539803 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" event={"ID":"c85caecd-2eec-479e-82a3-2ac3c53c79c6","Type":"ContainerStarted","Data":"7c3df456d26b3f55c0c3f0e8e6da999cbc7ad2995bbe95328324c900796cdcc4"}
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.539861 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" event={"ID":"c85caecd-2eec-479e-82a3-2ac3c53c79c6","Type":"ContainerStarted","Data":"c5aa828cd072604ed1f906a58b65bc98f6dfd27675da5071e1386f563dc177a1"}
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.539955 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.549230 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5" event={"ID":"ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf","Type":"ContainerDied","Data":"048e1dff87736aab4fe9377cb7178695cb1717c58a7071500e8672995ab0ccd7"}
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.549270 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.549280 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="048e1dff87736aab4fe9377cb7178695cb1717c58a7071500e8672995ab0ccd7"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.550355 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhwph" event={"ID":"1f9526ea-3ca9-4727-aadd-3103419511d9","Type":"ContainerStarted","Data":"a25f71c0e1b8e215c2c97229db7543cf578b69337a997116cde1864efa87346a"}
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.551845 4907 generic.go:334] "Generic (PLEG): container finished" podID="ed699310-2f9f-414f-ad04-7778af36ddb7" containerID="41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13" exitCode=0
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.552950 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcgf" event={"ID":"ed699310-2f9f-414f-ad04-7778af36ddb7","Type":"ContainerDied","Data":"41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13"}
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.569621 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" podStartSLOduration=127.569596388 podStartE2EDuration="2m7.569596388s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:17.565769585 +0000 UTC m=+152.695052217" watchObservedRunningTime="2026-01-27 18:08:17.569596388 +0000 UTC m=+152.698879000"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.590573 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.635639 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.683600 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.707319 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf"
Jan 27 18:08:17 crc kubenswrapper[4907]: W0127 18:08:17.719787 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8e033ab3_25a2_4b59_80a5_a9af38d07e93.slice/crio-30c3094c36336ffc2d466b954b58cf1289ac122602fa1b45eefb81bf89522be1 WatchSource:0}: Error finding container 30c3094c36336ffc2d466b954b58cf1289ac122602fa1b45eefb81bf89522be1: Status 404 returned error can't find the container with id 30c3094c36336ffc2d466b954b58cf1289ac122602fa1b45eefb81bf89522be1
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.781623 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.865956 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 18:08:17 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld
Jan 27 18:08:17 crc kubenswrapper[4907]: [+]process-running ok
Jan 27 18:08:17 crc kubenswrapper[4907]: healthz check failed
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.866082 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 18:08:17 crc kubenswrapper[4907]: I0127 18:08:17.979659 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m4mfc"]
Jan 27 18:08:17 crc kubenswrapper[4907]: W0127 18:08:17.992309 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8c7f489_c85f_47d4_9ef7_d0f9aba0cc19.slice/crio-b4e201da5789daaf413f87f58fcb846077da1151d2a01d4d561aec46a6d0a522 WatchSource:0}: Error finding container b4e201da5789daaf413f87f58fcb846077da1151d2a01d4d561aec46a6d0a522: Status 404 returned error can't find the container with id b4e201da5789daaf413f87f58fcb846077da1151d2a01d4d561aec46a6d0a522
Jan 27 18:08:18 crc kubenswrapper[4907]: I0127 18:08:18.098070 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 27 18:08:18 crc kubenswrapper[4907]: I0127 18:08:18.596415 4907 generic.go:334] "Generic (PLEG): container finished" podID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerID="654679c743b9560bbba18b38261b7b4cf9709df04c5818506bb60f06a2ff6062" exitCode=0
Jan 27 18:08:18 crc kubenswrapper[4907]: I0127 18:08:18.596478 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhwph" event={"ID":"1f9526ea-3ca9-4727-aadd-3103419511d9","Type":"ContainerDied","Data":"654679c743b9560bbba18b38261b7b4cf9709df04c5818506bb60f06a2ff6062"}
Jan 27 18:08:18 crc kubenswrapper[4907]: I0127 18:08:18.642647 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3d614d08-52fd-42af-a4bd-b17d80303a0d","Type":"ContainerStarted","Data":"c021a49533f9d2b9b5d2292cf1255b5fe4ad3bb1c0de4267e06e3872cd5c71f6"}
Jan 27 18:08:18 crc kubenswrapper[4907]: I0127 18:08:18.644403 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8e033ab3-25a2-4b59-80a5-a9af38d07e93","Type":"ContainerStarted","Data":"30c3094c36336ffc2d466b954b58cf1289ac122602fa1b45eefb81bf89522be1"}
Jan 27 18:08:18 crc kubenswrapper[4907]: I0127 18:08:18.646914 4907 generic.go:334] "Generic (PLEG): container finished" podID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerID="38285b14f93c22653ebbde6f30cf34ab1bec2a2df662e6ed5f2ede4a2203a9bb" exitCode=0
Jan 27 18:08:18 crc kubenswrapper[4907]: I0127 18:08:18.646995 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4mfc" event={"ID":"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19","Type":"ContainerDied","Data":"38285b14f93c22653ebbde6f30cf34ab1bec2a2df662e6ed5f2ede4a2203a9bb"}
Jan 27 18:08:18 crc kubenswrapper[4907]: I0127 18:08:18.647125 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4mfc" event={"ID":"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19","Type":"ContainerStarted","Data":"b4e201da5789daaf413f87f58fcb846077da1151d2a01d4d561aec46a6d0a522"}
Jan 27 18:08:18 crc kubenswrapper[4907]: I0127 18:08:18.853549 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 18:08:18 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld
Jan 27 18:08:18 crc kubenswrapper[4907]: [+]process-running ok
Jan 27 18:08:18 crc kubenswrapper[4907]: healthz check failed
Jan 27 18:08:18 crc kubenswrapper[4907]: I0127 18:08:18.853624 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 18:08:18 crc kubenswrapper[4907]: I0127 18:08:18.889252 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2"
Jan 27 18:08:19 crc kubenswrapper[4907]: I0127 18:08:19.411575 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-4tcrf"
Jan 27 18:08:19 crc kubenswrapper[4907]: I0127 18:08:19.798591 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8e033ab3-25a2-4b59-80a5-a9af38d07e93","Type":"ContainerStarted","Data":"2090073edde9c9116430ed919aa591dc52f9fc6d85981d69423df39d08b0aca5"}
Jan 27 18:08:19 crc kubenswrapper[4907]: I0127 18:08:19.804385 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3d614d08-52fd-42af-a4bd-b17d80303a0d","Type":"ContainerStarted","Data":"f1de975e142966b525c5e1a58e3fe61084421fc58711740060b3a5b538fba656"}
Jan 27 18:08:19 crc kubenswrapper[4907]: I0127 18:08:19.814315 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.814297334 podStartE2EDuration="3.814297334s" podCreationTimestamp="2026-01-27 18:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:19.813288314 +0000 UTC m=+154.942570926" watchObservedRunningTime="2026-01-27 18:08:19.814297334 +0000 UTC m=+154.943579946"
Jan 27 18:08:19 crc kubenswrapper[4907]: I0127 18:08:19.832654 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.832634196 podStartE2EDuration="2.832634196s" podCreationTimestamp="2026-01-27 18:08:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:19.830526634 +0000 UTC m=+154.959809266" watchObservedRunningTime="2026-01-27 18:08:19.832634196 +0000 UTC m=+154.961916808"
Jan 27 18:08:19 crc kubenswrapper[4907]: I0127 18:08:19.851170 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 18:08:19 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld
Jan 27 18:08:19 crc kubenswrapper[4907]: [+]process-running ok
Jan 27 18:08:19 crc kubenswrapper[4907]: healthz check failed
Jan 27 18:08:19 crc kubenswrapper[4907]: I0127 18:08:19.851218 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 18:08:20 crc kubenswrapper[4907]: I0127 18:08:20.815869 4907 generic.go:334] "Generic (PLEG): container finished" podID="8e033ab3-25a2-4b59-80a5-a9af38d07e93" containerID="2090073edde9c9116430ed919aa591dc52f9fc6d85981d69423df39d08b0aca5" exitCode=0
Jan 27 18:08:20 crc kubenswrapper[4907]: I0127 18:08:20.815913 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8e033ab3-25a2-4b59-80a5-a9af38d07e93","Type":"ContainerDied","Data":"2090073edde9c9116430ed919aa591dc52f9fc6d85981d69423df39d08b0aca5"}
Jan 27 18:08:20 crc kubenswrapper[4907]: I0127 18:08:20.819468 4907 generic.go:334] "Generic (PLEG): container finished" podID="3d614d08-52fd-42af-a4bd-b17d80303a0d" containerID="f1de975e142966b525c5e1a58e3fe61084421fc58711740060b3a5b538fba656" exitCode=0
Jan 27 18:08:20 crc kubenswrapper[4907]: I0127 18:08:20.819504 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3d614d08-52fd-42af-a4bd-b17d80303a0d","Type":"ContainerDied","Data":"f1de975e142966b525c5e1a58e3fe61084421fc58711740060b3a5b538fba656"}
Jan 27 18:08:20 crc kubenswrapper[4907]: I0127 18:08:20.852378 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 18:08:20 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld
Jan 27 18:08:20 crc kubenswrapper[4907]: [+]process-running ok
Jan 27 18:08:20 crc kubenswrapper[4907]: healthz check failed
Jan 27 18:08:20 crc kubenswrapper[4907]: I0127 18:08:20.852472 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 18:08:21 crc kubenswrapper[4907]: I0127 18:08:21.683469 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:21 crc kubenswrapper[4907]: I0127 18:08:21.694901 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb"
Jan 27 18:08:21 crc kubenswrapper[4907]: I0127 18:08:21.852530 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 18:08:21 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld
Jan 27 18:08:21 crc kubenswrapper[4907]: [+]process-running ok
Jan 27 18:08:21 crc kubenswrapper[4907]: healthz check failed
Jan 27 18:08:21 crc kubenswrapper[4907]: I0127 18:08:21.852871 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.268497 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.302109 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kube-api-access\") pod \"8e033ab3-25a2-4b59-80a5-a9af38d07e93\" (UID: \"8e033ab3-25a2-4b59-80a5-a9af38d07e93\") "
Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.302191 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kubelet-dir\") pod \"8e033ab3-25a2-4b59-80a5-a9af38d07e93\" (UID: \"8e033ab3-25a2-4b59-80a5-a9af38d07e93\") "
Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.302305 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8e033ab3-25a2-4b59-80a5-a9af38d07e93" (UID: "8e033ab3-25a2-4b59-80a5-a9af38d07e93"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.302510 4907 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.309975 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8e033ab3-25a2-4b59-80a5-a9af38d07e93" (UID: "8e033ab3-25a2-4b59-80a5-a9af38d07e93"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.351860 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.403291 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e033ab3-25a2-4b59-80a5-a9af38d07e93-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.504248 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d614d08-52fd-42af-a4bd-b17d80303a0d-kubelet-dir\") pod \"3d614d08-52fd-42af-a4bd-b17d80303a0d\" (UID: \"3d614d08-52fd-42af-a4bd-b17d80303a0d\") "
Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.504355 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d614d08-52fd-42af-a4bd-b17d80303a0d-kube-api-access\") pod \"3d614d08-52fd-42af-a4bd-b17d80303a0d\" (UID: \"3d614d08-52fd-42af-a4bd-b17d80303a0d\") "
Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.505348 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d614d08-52fd-42af-a4bd-b17d80303a0d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3d614d08-52fd-42af-a4bd-b17d80303a0d" (UID: "3d614d08-52fd-42af-a4bd-b17d80303a0d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.508776 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d614d08-52fd-42af-a4bd-b17d80303a0d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3d614d08-52fd-42af-a4bd-b17d80303a0d" (UID: "3d614d08-52fd-42af-a4bd-b17d80303a0d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.605797 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d614d08-52fd-42af-a4bd-b17d80303a0d-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.605829 4907 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d614d08-52fd-42af-a4bd-b17d80303a0d-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.848686 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8e033ab3-25a2-4b59-80a5-a9af38d07e93","Type":"ContainerDied","Data":"30c3094c36336ffc2d466b954b58cf1289ac122602fa1b45eefb81bf89522be1"}
Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.849052 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30c3094c36336ffc2d466b954b58cf1289ac122602fa1b45eefb81bf89522be1"
Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.848726 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.851177 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3d614d08-52fd-42af-a4bd-b17d80303a0d","Type":"ContainerDied","Data":"c021a49533f9d2b9b5d2292cf1255b5fe4ad3bb1c0de4267e06e3872cd5c71f6"}
Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.851219 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c021a49533f9d2b9b5d2292cf1255b5fe4ad3bb1c0de4267e06e3872cd5c71f6"
Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.851415 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.852954 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 18:08:22 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld
Jan 27 18:08:22 crc kubenswrapper[4907]: [+]process-running ok
Jan 27 18:08:22 crc kubenswrapper[4907]: healthz check failed
Jan 27 18:08:22 crc kubenswrapper[4907]: I0127 18:08:22.852991 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 18:08:23 crc kubenswrapper[4907]: I0127 18:08:23.852348 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 18:08:23 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld
Jan 27 18:08:23 crc kubenswrapper[4907]: [+]process-running ok
Jan 27 18:08:23 crc kubenswrapper[4907]: healthz check failed
Jan 27 18:08:23 crc kubenswrapper[4907]: I0127 18:08:23.852409 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 18:08:24 crc kubenswrapper[4907]: I0127 18:08:24.852655 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 18:08:24 crc kubenswrapper[4907]: [-]has-synced failed: reason withheld
Jan 27 18:08:24 crc kubenswrapper[4907]: [+]process-running ok
Jan 27 18:08:24 crc kubenswrapper[4907]: healthz check failed
Jan 27 18:08:24 crc kubenswrapper[4907]: I0127 18:08:24.853181 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 18:08:25 crc kubenswrapper[4907]: I0127 18:08:25.854180 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 18:08:25 crc kubenswrapper[4907]: [+]has-synced ok
Jan 27 18:08:25 crc kubenswrapper[4907]: [+]process-running ok
Jan 27 18:08:25 crc kubenswrapper[4907]: healthz check failed
Jan 27 18:08:25 crc kubenswrapper[4907]: I0127 18:08:25.854266 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 18:08:26 crc kubenswrapper[4907]: I0127 18:08:26.521083 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 18:08:26 crc kubenswrapper[4907]: I0127 18:08:26.521412 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 18:08:26 crc kubenswrapper[4907]: I0127 18:08:26.634128 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Jan 27 18:08:26 crc kubenswrapper[4907]: I0127 18:08:26.634216 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused"
Jan 27 18:08:26 crc kubenswrapper[4907]: I0127 18:08:26.634140 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Jan 27 18:08:26 crc kubenswrapper[4907]: I0127 18:08:26.634789 4907 prober.go:107]
"Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:26 crc kubenswrapper[4907]: I0127 18:08:26.821468 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:26 crc kubenswrapper[4907]: I0127 18:08:26.824864 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-grwdr" Jan 27 18:08:26 crc kubenswrapper[4907]: I0127 18:08:26.853450 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:26 crc kubenswrapper[4907]: I0127 18:08:26.855910 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 18:08:33 crc kubenswrapper[4907]: I0127 18:08:33.061004 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9j78b"] Jan 27 18:08:33 crc kubenswrapper[4907]: I0127 18:08:33.061925 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" podUID="248aff8a-60f5-4154-a7bb-2dd95e4b2555" containerName="controller-manager" containerID="cri-o://ee3086e80dac48aa23ee0d75f12dbfe9110dca0a98e225e946f66043c47461fb" gracePeriod=30 Jan 27 18:08:33 crc kubenswrapper[4907]: I0127 18:08:33.089918 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"] Jan 27 18:08:33 crc kubenswrapper[4907]: I0127 18:08:33.090182 4907 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" podUID="98f518f9-4f3f-45f1-80f4-b50d4eb03135" containerName="route-controller-manager" containerID="cri-o://5ced0bde139ab2fd5b1681b92757a3dcb9a399cb22cb1c3725107cf2b29c751c" gracePeriod=30 Jan 27 18:08:33 crc kubenswrapper[4907]: I0127 18:08:33.398846 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:08:33 crc kubenswrapper[4907]: I0127 18:08:33.418047 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eeaae2ee-c57b-4323-9d3c-563d87d85f08-metrics-certs\") pod \"network-metrics-daemon-n2z5k\" (UID: \"eeaae2ee-c57b-4323-9d3c-563d87d85f08\") " pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:08:33 crc kubenswrapper[4907]: I0127 18:08:33.589937 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2z5k" Jan 27 18:08:34 crc kubenswrapper[4907]: I0127 18:08:34.975968 4907 generic.go:334] "Generic (PLEG): container finished" podID="98f518f9-4f3f-45f1-80f4-b50d4eb03135" containerID="5ced0bde139ab2fd5b1681b92757a3dcb9a399cb22cb1c3725107cf2b29c751c" exitCode=0 Jan 27 18:08:34 crc kubenswrapper[4907]: I0127 18:08:34.976131 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" event={"ID":"98f518f9-4f3f-45f1-80f4-b50d4eb03135","Type":"ContainerDied","Data":"5ced0bde139ab2fd5b1681b92757a3dcb9a399cb22cb1c3725107cf2b29c751c"} Jan 27 18:08:34 crc kubenswrapper[4907]: I0127 18:08:34.982664 4907 generic.go:334] "Generic (PLEG): container finished" podID="248aff8a-60f5-4154-a7bb-2dd95e4b2555" containerID="ee3086e80dac48aa23ee0d75f12dbfe9110dca0a98e225e946f66043c47461fb" exitCode=0 Jan 27 18:08:34 crc kubenswrapper[4907]: I0127 18:08:34.982717 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" event={"ID":"248aff8a-60f5-4154-a7bb-2dd95e4b2555","Type":"ContainerDied","Data":"ee3086e80dac48aa23ee0d75f12dbfe9110dca0a98e225e946f66043c47461fb"} Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.250524 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.634165 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.634223 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" 
podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.634317 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.634404 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.634476 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-h79fx" Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.635057 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.635231 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.635395 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" 
containerStatusID={"Type":"cri-o","ID":"4a5bfa6da2878ef843f8deef09696a24a46a75213b727c839a22dd03f7364541"} pod="openshift-console/downloads-7954f5f757-h79fx" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.635585 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" containerID="cri-o://4a5bfa6da2878ef843f8deef09696a24a46a75213b727c839a22dd03f7364541" gracePeriod=2 Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.706845 4907 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9j78b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.707244 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" podUID="248aff8a-60f5-4154-a7bb-2dd95e4b2555" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.978640 4907 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7mcmq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 27 18:08:36 crc kubenswrapper[4907]: I0127 18:08:36.978724 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" podUID="98f518f9-4f3f-45f1-80f4-b50d4eb03135" 
containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 27 18:08:40 crc kubenswrapper[4907]: E0127 18:08:40.661993 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 18:08:40 crc kubenswrapper[4907]: E0127 18:08:40.662541 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gdt5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePol
icy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pnt7r_openshift-marketplace(2d1e3321-a7c6-4910-adec-31bf7b3c8f0a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:08:40 crc kubenswrapper[4907]: E0127 18:08:40.663810 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pnt7r" podUID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" Jan 27 18:08:44 crc kubenswrapper[4907]: E0127 18:08:44.642880 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 18:08:44 crc kubenswrapper[4907]: E0127 18:08:44.643138 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s97lg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mhc2f_openshift-marketplace(7c7f1204-674f-4d4e-a695-28b2d0956b32): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:08:44 crc kubenswrapper[4907]: E0127 18:08:44.644427 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-mhc2f" podUID="7c7f1204-674f-4d4e-a695-28b2d0956b32" Jan 27 18:08:45 crc 
kubenswrapper[4907]: E0127 18:08:45.504779 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-mhc2f" podUID="7c7f1204-674f-4d4e-a695-28b2d0956b32" Jan 27 18:08:45 crc kubenswrapper[4907]: E0127 18:08:45.504888 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pnt7r" podUID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" Jan 27 18:08:46 crc kubenswrapper[4907]: I0127 18:08:46.633887 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:46 crc kubenswrapper[4907]: I0127 18:08:46.634285 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:47 crc kubenswrapper[4907]: I0127 18:08:47.055393 4907 generic.go:334] "Generic (PLEG): container finished" podID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerID="4a5bfa6da2878ef843f8deef09696a24a46a75213b727c839a22dd03f7364541" exitCode=0 Jan 27 18:08:47 crc kubenswrapper[4907]: I0127 18:08:47.055485 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-h79fx" 
event={"ID":"c8a31b60-14c7-4b73-a17f-60d101c0119b","Type":"ContainerDied","Data":"4a5bfa6da2878ef843f8deef09696a24a46a75213b727c839a22dd03f7364541"} Jan 27 18:08:47 crc kubenswrapper[4907]: I0127 18:08:47.631047 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" Jan 27 18:08:47 crc kubenswrapper[4907]: I0127 18:08:47.706598 4907 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9j78b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 18:08:47 crc kubenswrapper[4907]: I0127 18:08:47.706697 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" podUID="248aff8a-60f5-4154-a7bb-2dd95e4b2555" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 18:08:47 crc kubenswrapper[4907]: I0127 18:08:47.978719 4907 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7mcmq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 18:08:47 crc kubenswrapper[4907]: I0127 18:08:47.978843 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" podUID="98f518f9-4f3f-45f1-80f4-b50d4eb03135" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 18:08:49 crc kubenswrapper[4907]: E0127 18:08:49.646653 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 18:08:49 crc kubenswrapper[4907]: E0127 18:08:49.646844 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-frltg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError
,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-klwtz_openshift-marketplace(dee6d631-48d1-4137-9736-c028fb27e655): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:08:49 crc kubenswrapper[4907]: E0127 18:08:49.648135 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-klwtz" podUID="dee6d631-48d1-4137-9736-c028fb27e655" Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.156086 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-klwtz" podUID="dee6d631-48d1-4137-9736-c028fb27e655" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.215051 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.220518 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.243353 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.243522 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mthtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,
ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-cg67x_openshift-marketplace(7ee8faea-87ec-4620-b6a8-db398d35039a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.245766 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-cg67x" podUID="7ee8faea-87ec-4620-b6a8-db398d35039a" Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.250034 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.250211 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8wsq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-glcgf_openshift-marketplace(ed699310-2f9f-414f-ad04-7778af36ddb7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.250284 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.250360 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9dz68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-b7l4d_openshift-marketplace(f317b8ef-4875-4f24-8926-8efd5826a51e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.250463 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"] Jan 27 
18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.250793 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="248aff8a-60f5-4154-a7bb-2dd95e4b2555" containerName="controller-manager" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.250815 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="248aff8a-60f5-4154-a7bb-2dd95e4b2555" containerName="controller-manager" Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.250835 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e033ab3-25a2-4b59-80a5-a9af38d07e93" containerName="pruner" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.250845 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e033ab3-25a2-4b59-80a5-a9af38d07e93" containerName="pruner" Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.250856 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f518f9-4f3f-45f1-80f4-b50d4eb03135" containerName="route-controller-manager" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.250863 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f518f9-4f3f-45f1-80f4-b50d4eb03135" containerName="route-controller-manager" Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.250875 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d614d08-52fd-42af-a4bd-b17d80303a0d" containerName="pruner" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.250882 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d614d08-52fd-42af-a4bd-b17d80303a0d" containerName="pruner" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.251009 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d614d08-52fd-42af-a4bd-b17d80303a0d" containerName="pruner" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.251021 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f518f9-4f3f-45f1-80f4-b50d4eb03135" containerName="route-controller-manager" Jan 27 18:08:51 crc 
kubenswrapper[4907]: I0127 18:08:51.251033 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e033ab3-25a2-4b59-80a5-a9af38d07e93" containerName="pruner" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.251046 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="248aff8a-60f5-4154-a7bb-2dd95e4b2555" containerName="controller-manager" Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.251467 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-b7l4d" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" Jan 27 18:08:51 crc kubenswrapper[4907]: E0127 18:08:51.251480 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-glcgf" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.252259 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.274787 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"] Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277100 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n697\" (UniqueName: \"kubernetes.io/projected/248aff8a-60f5-4154-a7bb-2dd95e4b2555-kube-api-access-4n697\") pod \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277155 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/248aff8a-60f5-4154-a7bb-2dd95e4b2555-serving-cert\") pod \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277182 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzft7\" (UniqueName: \"kubernetes.io/projected/98f518f9-4f3f-45f1-80f4-b50d4eb03135-kube-api-access-jzft7\") pod \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277198 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-proxy-ca-bundles\") pod \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277215 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-config\") pod 
\"98f518f9-4f3f-45f1-80f4-b50d4eb03135\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277276 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-client-ca\") pod \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277324 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98f518f9-4f3f-45f1-80f4-b50d4eb03135-serving-cert\") pod \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277343 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-config\") pod \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\" (UID: \"248aff8a-60f5-4154-a7bb-2dd95e4b2555\") " Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277359 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-client-ca\") pod \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\" (UID: \"98f518f9-4f3f-45f1-80f4-b50d4eb03135\") " Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277428 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-config\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277450 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce1035d6-a135-49c3-8d47-48005ecfc2d7-serving-cert\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277478 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-client-ca\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.277507 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmxrm\" (UniqueName: \"kubernetes.io/projected/ce1035d6-a135-49c3-8d47-48005ecfc2d7-kube-api-access-mmxrm\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.281444 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-config" (OuterVolumeSpecName: "config") pod "98f518f9-4f3f-45f1-80f4-b50d4eb03135" (UID: "98f518f9-4f3f-45f1-80f4-b50d4eb03135"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.281463 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "248aff8a-60f5-4154-a7bb-2dd95e4b2555" (UID: "248aff8a-60f5-4154-a7bb-2dd95e4b2555"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.281447 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-config" (OuterVolumeSpecName: "config") pod "248aff8a-60f5-4154-a7bb-2dd95e4b2555" (UID: "248aff8a-60f5-4154-a7bb-2dd95e4b2555"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.282097 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-client-ca" (OuterVolumeSpecName: "client-ca") pod "248aff8a-60f5-4154-a7bb-2dd95e4b2555" (UID: "248aff8a-60f5-4154-a7bb-2dd95e4b2555"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.282225 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-client-ca" (OuterVolumeSpecName: "client-ca") pod "98f518f9-4f3f-45f1-80f4-b50d4eb03135" (UID: "98f518f9-4f3f-45f1-80f4-b50d4eb03135"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.285805 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98f518f9-4f3f-45f1-80f4-b50d4eb03135-kube-api-access-jzft7" (OuterVolumeSpecName: "kube-api-access-jzft7") pod "98f518f9-4f3f-45f1-80f4-b50d4eb03135" (UID: "98f518f9-4f3f-45f1-80f4-b50d4eb03135"). InnerVolumeSpecName "kube-api-access-jzft7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.286251 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/248aff8a-60f5-4154-a7bb-2dd95e4b2555-kube-api-access-4n697" (OuterVolumeSpecName: "kube-api-access-4n697") pod "248aff8a-60f5-4154-a7bb-2dd95e4b2555" (UID: "248aff8a-60f5-4154-a7bb-2dd95e4b2555"). InnerVolumeSpecName "kube-api-access-4n697". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.287052 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f518f9-4f3f-45f1-80f4-b50d4eb03135-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "98f518f9-4f3f-45f1-80f4-b50d4eb03135" (UID: "98f518f9-4f3f-45f1-80f4-b50d4eb03135"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.287106 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/248aff8a-60f5-4154-a7bb-2dd95e4b2555-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "248aff8a-60f5-4154-a7bb-2dd95e4b2555" (UID: "248aff8a-60f5-4154-a7bb-2dd95e4b2555"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379002 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmxrm\" (UniqueName: \"kubernetes.io/projected/ce1035d6-a135-49c3-8d47-48005ecfc2d7-kube-api-access-mmxrm\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379131 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-config\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379161 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce1035d6-a135-49c3-8d47-48005ecfc2d7-serving-cert\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379196 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-client-ca\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379247 4907 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379260 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98f518f9-4f3f-45f1-80f4-b50d4eb03135-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379271 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379280 4907 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379292 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n697\" (UniqueName: \"kubernetes.io/projected/248aff8a-60f5-4154-a7bb-2dd95e4b2555-kube-api-access-4n697\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379306 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/248aff8a-60f5-4154-a7bb-2dd95e4b2555-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379315 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzft7\" (UniqueName: \"kubernetes.io/projected/98f518f9-4f3f-45f1-80f4-b50d4eb03135-kube-api-access-jzft7\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.379326 4907 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/248aff8a-60f5-4154-a7bb-2dd95e4b2555-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:51 crc 
kubenswrapper[4907]: I0127 18:08:51.379334 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98f518f9-4f3f-45f1-80f4-b50d4eb03135-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.380270 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-client-ca\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.380346 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-config\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.383240 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce1035d6-a135-49c3-8d47-48005ecfc2d7-serving-cert\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 18:08:51.397885 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmxrm\" (UniqueName: \"kubernetes.io/projected/ce1035d6-a135-49c3-8d47-48005ecfc2d7-kube-api-access-mmxrm\") pod \"route-controller-manager-58bf9f9fdf-l77zq\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" Jan 27 18:08:51 crc kubenswrapper[4907]: I0127 
18:08:51.576638 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" Jan 27 18:08:52 crc kubenswrapper[4907]: I0127 18:08:52.087648 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" event={"ID":"98f518f9-4f3f-45f1-80f4-b50d4eb03135","Type":"ContainerDied","Data":"ab8e4f87a2ffbe236cf7a8b9faa6044f734bcd783a8ccf483230babd8b2d0aab"} Jan 27 18:08:52 crc kubenswrapper[4907]: I0127 18:08:52.087707 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq" Jan 27 18:08:52 crc kubenswrapper[4907]: I0127 18:08:52.087723 4907 scope.go:117] "RemoveContainer" containerID="5ced0bde139ab2fd5b1681b92757a3dcb9a399cb22cb1c3725107cf2b29c751c" Jan 27 18:08:52 crc kubenswrapper[4907]: I0127 18:08:52.091220 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" event={"ID":"248aff8a-60f5-4154-a7bb-2dd95e4b2555","Type":"ContainerDied","Data":"53f1bb78246a95f04a0e3a59320d7de5b66a380634a406b2deaad462424ff23c"} Jan 27 18:08:52 crc kubenswrapper[4907]: I0127 18:08:52.091418 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9j78b" Jan 27 18:08:52 crc kubenswrapper[4907]: I0127 18:08:52.167332 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"] Jan 27 18:08:52 crc kubenswrapper[4907]: I0127 18:08:52.169349 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7mcmq"] Jan 27 18:08:52 crc kubenswrapper[4907]: I0127 18:08:52.174829 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9j78b"] Jan 27 18:08:52 crc kubenswrapper[4907]: I0127 18:08:52.177279 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9j78b"] Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.039959 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-68686db9cf-mskkh"] Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.040893 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.046624 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.046929 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.047970 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.048377 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.048393 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.048764 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.050929 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68686db9cf-mskkh"] Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.054947 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.079785 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.080674 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.085470 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.085847 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.096044 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.105257 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-client-ca\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.105321 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02a0c38d-5a4e-4189-86b8-6a42930553a2-serving-cert\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.105344 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.105362 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-proxy-ca-bundles\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.105424 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-config\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.105456 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.105475 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnlnb\" (UniqueName: \"kubernetes.io/projected/02a0c38d-5a4e-4189-86b8-6a42930553a2-kube-api-access-pnlnb\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.117899 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"] Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.207937 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-config\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.208282 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.208306 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnlnb\" (UniqueName: \"kubernetes.io/projected/02a0c38d-5a4e-4189-86b8-6a42930553a2-kube-api-access-pnlnb\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.208339 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-client-ca\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.208365 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02a0c38d-5a4e-4189-86b8-6a42930553a2-serving-cert\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.208383 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.208405 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-proxy-ca-bundles\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.209272 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.210014 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-config\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.210350 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-client-ca\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.210382 4907 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-proxy-ca-bundles\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.215253 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02a0c38d-5a4e-4189-86b8-6a42930553a2-serving-cert\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.238138 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.238331 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnlnb\" (UniqueName: \"kubernetes.io/projected/02a0c38d-5a4e-4189-86b8-6a42930553a2-kube-api-access-pnlnb\") pod \"controller-manager-68686db9cf-mskkh\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.355487 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.394979 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.753947 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="248aff8a-60f5-4154-a7bb-2dd95e4b2555" path="/var/lib/kubelet/pods/248aff8a-60f5-4154-a7bb-2dd95e4b2555/volumes" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.754711 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98f518f9-4f3f-45f1-80f4-b50d4eb03135" path="/var/lib/kubelet/pods/98f518f9-4f3f-45f1-80f4-b50d4eb03135/volumes" Jan 27 18:08:53 crc kubenswrapper[4907]: I0127 18:08:53.787274 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:08:55 crc kubenswrapper[4907]: E0127 18:08:55.292923 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-glcgf" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" Jan 27 18:08:55 crc kubenswrapper[4907]: E0127 18:08:55.293081 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-b7l4d" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" Jan 27 18:08:55 crc kubenswrapper[4907]: E0127 18:08:55.293131 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-cg67x" podUID="7ee8faea-87ec-4620-b6a8-db398d35039a" Jan 27 18:08:55 crc kubenswrapper[4907]: 
I0127 18:08:55.327116 4907 scope.go:117] "RemoveContainer" containerID="ee3086e80dac48aa23ee0d75f12dbfe9110dca0a98e225e946f66043c47461fb" Jan 27 18:08:55 crc kubenswrapper[4907]: E0127 18:08:55.363528 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 18:08:55 crc kubenswrapper[4907]: E0127 18:08:55.364084 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9khpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevic
es:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-jhwph_openshift-marketplace(1f9526ea-3ca9-4727-aadd-3103419511d9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:08:55 crc kubenswrapper[4907]: E0127 18:08:55.366156 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-jhwph" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" Jan 27 18:08:55 crc kubenswrapper[4907]: I0127 18:08:55.550689 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n2z5k"] Jan 27 18:08:55 crc kubenswrapper[4907]: I0127 18:08:55.623047 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 18:08:55 crc kubenswrapper[4907]: I0127 18:08:55.884545 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"] Jan 27 18:08:55 crc kubenswrapper[4907]: I0127 18:08:55.899192 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68686db9cf-mskkh"] Jan 27 18:08:55 crc kubenswrapper[4907]: W0127 18:08:55.956611 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce1035d6_a135_49c3_8d47_48005ecfc2d7.slice/crio-6c4c87cd6630af39454fb9f935cdfba0320367bb644655464249899b5644e59a WatchSource:0}: Error finding container 6c4c87cd6630af39454fb9f935cdfba0320367bb644655464249899b5644e59a: Status 404 returned error can't find the container with id 
6c4c87cd6630af39454fb9f935cdfba0320367bb644655464249899b5644e59a Jan 27 18:08:56 crc kubenswrapper[4907]: I0127 18:08:56.127448 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" event={"ID":"eeaae2ee-c57b-4323-9d3c-563d87d85f08","Type":"ContainerStarted","Data":"cf780a670ea7e26bae79f6bfd9a8b3fe16142cfe305261656badb4b95b14c50a"} Jan 27 18:08:56 crc kubenswrapper[4907]: I0127 18:08:56.129017 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09","Type":"ContainerStarted","Data":"df45217b3decf5237b881f284a2825c844cbaf1f22a682d4bbd16bdc08117ced"} Jan 27 18:08:56 crc kubenswrapper[4907]: I0127 18:08:56.131641 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" event={"ID":"ce1035d6-a135-49c3-8d47-48005ecfc2d7","Type":"ContainerStarted","Data":"6c4c87cd6630af39454fb9f935cdfba0320367bb644655464249899b5644e59a"} Jan 27 18:08:56 crc kubenswrapper[4907]: I0127 18:08:56.136687 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" event={"ID":"02a0c38d-5a4e-4189-86b8-6a42930553a2","Type":"ContainerStarted","Data":"c20e1125c69187397fd3d90a194c9f5e0105aa13b720cd7fbb8a8dbc1ba84399"} Jan 27 18:08:56 crc kubenswrapper[4907]: E0127 18:08:56.138452 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-jhwph" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" Jan 27 18:08:56 crc kubenswrapper[4907]: I0127 18:08:56.521892 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:08:56 crc kubenswrapper[4907]: I0127 18:08:56.521983 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:08:56 crc kubenswrapper[4907]: I0127 18:08:56.639976 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:56 crc kubenswrapper[4907]: I0127 18:08:56.640058 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.220119 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" event={"ID":"ce1035d6-a135-49c3-8d47-48005ecfc2d7","Type":"ContainerStarted","Data":"24375466c4447104ecfe8d1ff0cda5957a80357ca1bcacb2835a1e6d752701ce"} Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.228800 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" podUID="ce1035d6-a135-49c3-8d47-48005ecfc2d7" containerName="route-controller-manager" containerID="cri-o://24375466c4447104ecfe8d1ff0cda5957a80357ca1bcacb2835a1e6d752701ce" gracePeriod=30 
Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.229434 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.253349 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.257890 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-h79fx" event={"ID":"c8a31b60-14c7-4b73-a17f-60d101c0119b","Type":"ContainerStarted","Data":"6629b1b5c77c6cdbd075185f02e2e8dccc29f1ed5e33db884228f2ec0a4dd7c2"} Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.258953 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-h79fx" Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.259026 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.259056 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.281279 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" event={"ID":"02a0c38d-5a4e-4189-86b8-6a42930553a2","Type":"ContainerStarted","Data":"4e1e30e060260bb9949f48d6023ec4e2c5730e0958cb3c17d65dc70097744cb0"} Jan 27 18:08:57 crc 
kubenswrapper[4907]: I0127 18:08:57.282635 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.305692 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" podStartSLOduration=24.305670255 podStartE2EDuration="24.305670255s" podCreationTimestamp="2026-01-27 18:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:57.262104976 +0000 UTC m=+192.391387588" watchObservedRunningTime="2026-01-27 18:08:57.305670255 +0000 UTC m=+192.434952867" Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.308922 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" event={"ID":"eeaae2ee-c57b-4323-9d3c-563d87d85f08","Type":"ContainerStarted","Data":"bba4db02fed7b83ec087a05e847f09908fd62f4840c5fe753d719bb1bf8eea4b"} Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.323700 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.329775 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09","Type":"ContainerStarted","Data":"68356953970f6d3642ef7e879a454bf3b0981704cd996f12254b0f4421aace32"} Jan 27 18:08:57 crc kubenswrapper[4907]: I0127 18:08:57.379465 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=4.379444919 podStartE2EDuration="4.379444919s" podCreationTimestamp="2026-01-27 18:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:57.377812871 +0000 UTC m=+192.507095483" watchObservedRunningTime="2026-01-27 18:08:57.379444919 +0000 UTC m=+192.508727531" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.339988 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n2z5k" event={"ID":"eeaae2ee-c57b-4323-9d3c-563d87d85f08","Type":"ContainerStarted","Data":"1c34b09cf0f7dbe39eb22af74758ea5e58fecc5910cfc23955e5c5c47e84b08b"} Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.342490 4907 generic.go:334] "Generic (PLEG): container finished" podID="cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09" containerID="68356953970f6d3642ef7e879a454bf3b0981704cd996f12254b0f4421aace32" exitCode=0 Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.342579 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09","Type":"ContainerDied","Data":"68356953970f6d3642ef7e879a454bf3b0981704cd996f12254b0f4421aace32"} Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.345977 4907 generic.go:334] "Generic (PLEG): container finished" podID="ce1035d6-a135-49c3-8d47-48005ecfc2d7" containerID="24375466c4447104ecfe8d1ff0cda5957a80357ca1bcacb2835a1e6d752701ce" exitCode=0 Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.346033 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" event={"ID":"ce1035d6-a135-49c3-8d47-48005ecfc2d7","Type":"ContainerDied","Data":"24375466c4447104ecfe8d1ff0cda5957a80357ca1bcacb2835a1e6d752701ce"} Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.346492 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 
10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.346548 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.354601 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-n2z5k" podStartSLOduration=168.354579214 podStartE2EDuration="2m48.354579214s" podCreationTimestamp="2026-01-27 18:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:58.354060538 +0000 UTC m=+193.483343150" watchObservedRunningTime="2026-01-27 18:08:58.354579214 +0000 UTC m=+193.483861846" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.355242 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" podStartSLOduration=5.355234203 podStartE2EDuration="5.355234203s" podCreationTimestamp="2026-01-27 18:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:08:57.408027965 +0000 UTC m=+192.537310577" watchObservedRunningTime="2026-01-27 18:08:58.355234203 +0000 UTC m=+193.484516835" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.424645 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.456512 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p"] Jan 27 18:08:58 crc kubenswrapper[4907]: E0127 18:08:58.456756 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce1035d6-a135-49c3-8d47-48005ecfc2d7" containerName="route-controller-manager" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.456767 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce1035d6-a135-49c3-8d47-48005ecfc2d7" containerName="route-controller-manager" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.456863 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce1035d6-a135-49c3-8d47-48005ecfc2d7" containerName="route-controller-manager" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.457232 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.506798 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p"] Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.624627 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-config\") pod \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.624719 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce1035d6-a135-49c3-8d47-48005ecfc2d7-serving-cert\") pod \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.624767 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmxrm\" (UniqueName: \"kubernetes.io/projected/ce1035d6-a135-49c3-8d47-48005ecfc2d7-kube-api-access-mmxrm\") pod \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.625191 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-client-ca\") pod \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\" (UID: \"ce1035d6-a135-49c3-8d47-48005ecfc2d7\") " Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.625372 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-config\") pod 
\"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.625398 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjktz\" (UniqueName: \"kubernetes.io/projected/090a67dd-469f-44de-9760-bb58338594d7-kube-api-access-zjktz\") pod \"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.625459 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-client-ca\") pod \"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.625528 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/090a67dd-469f-44de-9760-bb58338594d7-serving-cert\") pod \"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.625769 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-config" (OuterVolumeSpecName: "config") pod "ce1035d6-a135-49c3-8d47-48005ecfc2d7" (UID: "ce1035d6-a135-49c3-8d47-48005ecfc2d7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.626085 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-client-ca" (OuterVolumeSpecName: "client-ca") pod "ce1035d6-a135-49c3-8d47-48005ecfc2d7" (UID: "ce1035d6-a135-49c3-8d47-48005ecfc2d7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.632498 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce1035d6-a135-49c3-8d47-48005ecfc2d7-kube-api-access-mmxrm" (OuterVolumeSpecName: "kube-api-access-mmxrm") pod "ce1035d6-a135-49c3-8d47-48005ecfc2d7" (UID: "ce1035d6-a135-49c3-8d47-48005ecfc2d7"). InnerVolumeSpecName "kube-api-access-mmxrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.645743 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce1035d6-a135-49c3-8d47-48005ecfc2d7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ce1035d6-a135-49c3-8d47-48005ecfc2d7" (UID: "ce1035d6-a135-49c3-8d47-48005ecfc2d7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.726230 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-client-ca\") pod \"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.726304 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/090a67dd-469f-44de-9760-bb58338594d7-serving-cert\") pod \"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.726347 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-config\") pod \"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.726364 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjktz\" (UniqueName: \"kubernetes.io/projected/090a67dd-469f-44de-9760-bb58338594d7-kube-api-access-zjktz\") pod \"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.726420 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ce1035d6-a135-49c3-8d47-48005ecfc2d7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.726433 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmxrm\" (UniqueName: \"kubernetes.io/projected/ce1035d6-a135-49c3-8d47-48005ecfc2d7-kube-api-access-mmxrm\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.726444 4907 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.726452 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce1035d6-a135-49c3-8d47-48005ecfc2d7-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.727481 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-client-ca\") pod \"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.727915 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-config\") pod \"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.730197 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/090a67dd-469f-44de-9760-bb58338594d7-serving-cert\") pod 
\"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.750127 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjktz\" (UniqueName: \"kubernetes.io/projected/090a67dd-469f-44de-9760-bb58338594d7-kube-api-access-zjktz\") pod \"route-controller-manager-679d4f688f-zzf7p\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:58 crc kubenswrapper[4907]: I0127 18:08:58.769102 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:08:59 crc kubenswrapper[4907]: I0127 18:08:59.354635 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4mfc" event={"ID":"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19","Type":"ContainerStarted","Data":"69eb07d07e6a81384b1d0a6fb61a98baac37c5adcc43d4ee93e0d722ae9739f9"} Jan 27 18:08:59 crc kubenswrapper[4907]: I0127 18:08:59.357252 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" event={"ID":"ce1035d6-a135-49c3-8d47-48005ecfc2d7","Type":"ContainerDied","Data":"6c4c87cd6630af39454fb9f935cdfba0320367bb644655464249899b5644e59a"} Jan 27 18:08:59 crc kubenswrapper[4907]: I0127 18:08:59.357358 4907 scope.go:117] "RemoveContainer" containerID="24375466c4447104ecfe8d1ff0cda5957a80357ca1bcacb2835a1e6d752701ce" Jan 27 18:08:59 crc kubenswrapper[4907]: I0127 18:08:59.357474 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq" Jan 27 18:08:59 crc kubenswrapper[4907]: I0127 18:08:59.358386 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 27 18:08:59 crc kubenswrapper[4907]: I0127 18:08:59.358475 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 27 18:08:59 crc kubenswrapper[4907]: I0127 18:08:59.440868 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"] Jan 27 18:08:59 crc kubenswrapper[4907]: I0127 18:08:59.450218 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-l77zq"] Jan 27 18:08:59 crc kubenswrapper[4907]: I0127 18:08:59.761352 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce1035d6-a135-49c3-8d47-48005ecfc2d7" path="/var/lib/kubelet/pods/ce1035d6-a135-49c3-8d47-48005ecfc2d7/volumes" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.277040 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.283248 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.300934 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.364246 4907 generic.go:334] "Generic (PLEG): container finished" podID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerID="69eb07d07e6a81384b1d0a6fb61a98baac37c5adcc43d4ee93e0d722ae9739f9" exitCode=0 Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.364293 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4mfc" event={"ID":"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19","Type":"ContainerDied","Data":"69eb07d07e6a81384b1d0a6fb61a98baac37c5adcc43d4ee93e0d722ae9739f9"} Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.454599 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-var-lock\") pod \"installer-9-crc\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.454701 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.454728 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kube-api-access\") pod \"installer-9-crc\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 
18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.555940 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.556014 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kube-api-access\") pod \"installer-9-crc\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.556174 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.556900 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-var-lock\") pod \"installer-9-crc\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.556998 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-var-lock\") pod \"installer-9-crc\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.577952 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kube-api-access\") pod \"installer-9-crc\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.606337 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.751945 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.862635 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kube-api-access\") pod \"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09\" (UID: \"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09\") " Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.862734 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kubelet-dir\") pod \"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09\" (UID: \"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09\") " Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.863052 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09" (UID: "cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.866728 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09" (UID: "cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.964722 4907 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:00 crc kubenswrapper[4907]: I0127 18:09:00.964758 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:01 crc kubenswrapper[4907]: I0127 18:09:01.370973 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09","Type":"ContainerDied","Data":"df45217b3decf5237b881f284a2825c844cbaf1f22a682d4bbd16bdc08117ced"} Jan 27 18:09:01 crc kubenswrapper[4907]: I0127 18:09:01.371049 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df45217b3decf5237b881f284a2825c844cbaf1f22a682d4bbd16bdc08117ced" Jan 27 18:09:01 crc kubenswrapper[4907]: I0127 18:09:01.371071 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 18:09:03 crc kubenswrapper[4907]: I0127 18:09:03.441938 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p"] Jan 27 18:09:03 crc kubenswrapper[4907]: I0127 18:09:03.599709 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 18:09:04 crc kubenswrapper[4907]: I0127 18:09:04.392773 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9f4bcf33-f579-4173-afa7-055fe0ed0e8b","Type":"ContainerStarted","Data":"c2422fce3252ef87a2db89e76b11dc4004496dc8add034465ec5552dbf70bdce"} Jan 27 18:09:04 crc kubenswrapper[4907]: I0127 18:09:04.393995 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" event={"ID":"090a67dd-469f-44de-9760-bb58338594d7","Type":"ContainerStarted","Data":"ef0720fe87e3f12e59dc7091d752de57214cbed73b14c8a59f3d4b6af6479d52"} Jan 27 18:09:04 crc kubenswrapper[4907]: I0127 18:09:04.395821 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhc2f" event={"ID":"7c7f1204-674f-4d4e-a695-28b2d0956b32","Type":"ContainerStarted","Data":"82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5"} Jan 27 18:09:04 crc kubenswrapper[4907]: I0127 18:09:04.398100 4907 generic.go:334] "Generic (PLEG): container finished" podID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" containerID="c46d6df9d82d4f4cc7de32f448a7f920700e22083478d02639b39cd9ec76b646" exitCode=0 Jan 27 18:09:04 crc kubenswrapper[4907]: I0127 18:09:04.398167 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pnt7r" 
event={"ID":"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a","Type":"ContainerDied","Data":"c46d6df9d82d4f4cc7de32f448a7f920700e22083478d02639b39cd9ec76b646"} Jan 27 18:09:05 crc kubenswrapper[4907]: I0127 18:09:05.405950 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9f4bcf33-f579-4173-afa7-055fe0ed0e8b","Type":"ContainerStarted","Data":"84d1252629f94f6052dc7e9f370b57a2a08987285cf9a2ccd0aa7378e581c2a3"} Jan 27 18:09:05 crc kubenswrapper[4907]: I0127 18:09:05.408712 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" event={"ID":"090a67dd-469f-44de-9760-bb58338594d7","Type":"ContainerStarted","Data":"af0aaa68d6a7d39bc50e64ab3608b0b7067cb3ee76f1074863d96a6211612143"} Jan 27 18:09:05 crc kubenswrapper[4907]: I0127 18:09:05.410276 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:09:05 crc kubenswrapper[4907]: I0127 18:09:05.417076 4907 generic.go:334] "Generic (PLEG): container finished" podID="7c7f1204-674f-4d4e-a695-28b2d0956b32" containerID="82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5" exitCode=0 Jan 27 18:09:05 crc kubenswrapper[4907]: I0127 18:09:05.417150 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhc2f" event={"ID":"7c7f1204-674f-4d4e-a695-28b2d0956b32","Type":"ContainerDied","Data":"82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5"} Jan 27 18:09:05 crc kubenswrapper[4907]: I0127 18:09:05.422202 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:09:05 crc kubenswrapper[4907]: I0127 18:09:05.438538 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.438511712 podStartE2EDuration="5.438511712s" podCreationTimestamp="2026-01-27 18:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:09:05.429360144 +0000 UTC m=+200.558642766" watchObservedRunningTime="2026-01-27 18:09:05.438511712 +0000 UTC m=+200.567794364" Jan 27 18:09:05 crc kubenswrapper[4907]: I0127 18:09:05.516720 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" podStartSLOduration=12.51669122 podStartE2EDuration="12.51669122s" podCreationTimestamp="2026-01-27 18:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:09:05.511957652 +0000 UTC m=+200.641240294" watchObservedRunningTime="2026-01-27 18:09:05.51669122 +0000 UTC m=+200.645973852" Jan 27 18:09:06 crc kubenswrapper[4907]: I0127 18:09:06.653965 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-h79fx" Jan 27 18:09:07 crc kubenswrapper[4907]: I0127 18:09:07.437232 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4mfc" event={"ID":"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19","Type":"ContainerStarted","Data":"c186493fcdfc5fde9cc33fc5aaa9510c70d27c3fcd895a6941ef5faeb8eb1581"} Jan 27 18:09:07 crc kubenswrapper[4907]: I0127 18:09:07.448917 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:09:07 crc kubenswrapper[4907]: I0127 18:09:07.448981 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:09:07 crc kubenswrapper[4907]: I0127 18:09:07.788342 4907 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m4mfc" podStartSLOduration=3.090707191 podStartE2EDuration="50.788315212s" podCreationTimestamp="2026-01-27 18:08:17 +0000 UTC" firstStartedPulling="2026-01-27 18:08:18.650789713 +0000 UTC m=+153.780072325" lastFinishedPulling="2026-01-27 18:09:06.348397704 +0000 UTC m=+201.477680346" observedRunningTime="2026-01-27 18:09:07.457066026 +0000 UTC m=+202.586348708" watchObservedRunningTime="2026-01-27 18:09:07.788315212 +0000 UTC m=+202.917597854" Jan 27 18:09:09 crc kubenswrapper[4907]: I0127 18:09:09.416310 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m4mfc" podUID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerName="registry-server" probeResult="failure" output=< Jan 27 18:09:09 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 18:09:09 crc kubenswrapper[4907]: > Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.489573 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.505984 4907 generic.go:334] "Generic (PLEG): container finished" podID="7ee8faea-87ec-4620-b6a8-db398d35039a" containerID="ed1ff5a394e52796e4f4ec3501247d97700ad989bc354c05f28efa01945dae35" exitCode=0 Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.506054 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cg67x" event={"ID":"7ee8faea-87ec-4620-b6a8-db398d35039a","Type":"ContainerDied","Data":"ed1ff5a394e52796e4f4ec3501247d97700ad989bc354c05f28efa01945dae35"} Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.527222 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pnt7r" 
event={"ID":"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a","Type":"ContainerStarted","Data":"51dc4f5416873859523ad79a85bd45e3b83feb84c4b4c53fc426f1c4e5109641"} Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.537125 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcgf" event={"ID":"ed699310-2f9f-414f-ad04-7778af36ddb7","Type":"ContainerStarted","Data":"8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185"} Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.540300 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7l4d" event={"ID":"f317b8ef-4875-4f24-8926-8efd5826a51e","Type":"ContainerStarted","Data":"f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685"} Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.543771 4907 generic.go:334] "Generic (PLEG): container finished" podID="dee6d631-48d1-4137-9736-c028fb27e655" containerID="4b9a25f367c300489066223e7c655f68dd2a0d8bca339cc8ab69304836e3cab8" exitCode=0 Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.543842 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-klwtz" event={"ID":"dee6d631-48d1-4137-9736-c028fb27e655","Type":"ContainerDied","Data":"4b9a25f367c300489066223e7c655f68dd2a0d8bca339cc8ab69304836e3cab8"} Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.547908 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.550183 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pnt7r" podStartSLOduration=3.662961028 podStartE2EDuration="1m4.550161393s" podCreationTimestamp="2026-01-27 18:08:13 +0000 UTC" firstStartedPulling="2026-01-27 18:08:15.413733842 +0000 UTC m=+150.543016454" lastFinishedPulling="2026-01-27 
18:09:16.300934207 +0000 UTC m=+211.430216819" observedRunningTime="2026-01-27 18:09:17.546439194 +0000 UTC m=+212.675721806" watchObservedRunningTime="2026-01-27 18:09:17.550161393 +0000 UTC m=+212.679444005" Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.550909 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhc2f" event={"ID":"7c7f1204-674f-4d4e-a695-28b2d0956b32","Type":"ContainerStarted","Data":"f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d"} Jan 27 18:09:17 crc kubenswrapper[4907]: I0127 18:09:17.631459 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mhc2f" podStartSLOduration=3.738040472 podStartE2EDuration="1m4.631435652s" podCreationTimestamp="2026-01-27 18:08:13 +0000 UTC" firstStartedPulling="2026-01-27 18:08:15.402921102 +0000 UTC m=+150.532203714" lastFinishedPulling="2026-01-27 18:09:16.296316252 +0000 UTC m=+211.425598894" observedRunningTime="2026-01-27 18:09:17.630020631 +0000 UTC m=+212.759303243" watchObservedRunningTime="2026-01-27 18:09:17.631435652 +0000 UTC m=+212.760718264" Jan 27 18:09:18 crc kubenswrapper[4907]: I0127 18:09:18.559154 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhwph" event={"ID":"1f9526ea-3ca9-4727-aadd-3103419511d9","Type":"ContainerStarted","Data":"a32059dda4c689ff3e20fb83b5604a26321637f3f3d16ef0f676c3787ce5589e"} Jan 27 18:09:18 crc kubenswrapper[4907]: I0127 18:09:18.560979 4907 generic.go:334] "Generic (PLEG): container finished" podID="ed699310-2f9f-414f-ad04-7778af36ddb7" containerID="8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185" exitCode=0 Jan 27 18:09:18 crc kubenswrapper[4907]: I0127 18:09:18.561009 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcgf" 
event={"ID":"ed699310-2f9f-414f-ad04-7778af36ddb7","Type":"ContainerDied","Data":"8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185"} Jan 27 18:09:18 crc kubenswrapper[4907]: I0127 18:09:18.563281 4907 generic.go:334] "Generic (PLEG): container finished" podID="f317b8ef-4875-4f24-8926-8efd5826a51e" containerID="f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685" exitCode=0 Jan 27 18:09:18 crc kubenswrapper[4907]: I0127 18:09:18.563319 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7l4d" event={"ID":"f317b8ef-4875-4f24-8926-8efd5826a51e","Type":"ContainerDied","Data":"f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685"} Jan 27 18:09:19 crc kubenswrapper[4907]: I0127 18:09:19.571266 4907 generic.go:334] "Generic (PLEG): container finished" podID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerID="a32059dda4c689ff3e20fb83b5604a26321637f3f3d16ef0f676c3787ce5589e" exitCode=0 Jan 27 18:09:19 crc kubenswrapper[4907]: I0127 18:09:19.571316 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhwph" event={"ID":"1f9526ea-3ca9-4727-aadd-3103419511d9","Type":"ContainerDied","Data":"a32059dda4c689ff3e20fb83b5604a26321637f3f3d16ef0f676c3787ce5589e"} Jan 27 18:09:21 crc kubenswrapper[4907]: I0127 18:09:21.941494 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m4mfc"] Jan 27 18:09:21 crc kubenswrapper[4907]: I0127 18:09:21.942198 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m4mfc" podUID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerName="registry-server" containerID="cri-o://c186493fcdfc5fde9cc33fc5aaa9510c70d27c3fcd895a6941ef5faeb8eb1581" gracePeriod=2 Jan 27 18:09:22 crc kubenswrapper[4907]: I0127 18:09:22.590161 4907 generic.go:334] "Generic (PLEG): container finished" 
podID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerID="c186493fcdfc5fde9cc33fc5aaa9510c70d27c3fcd895a6941ef5faeb8eb1581" exitCode=0 Jan 27 18:09:22 crc kubenswrapper[4907]: I0127 18:09:22.590247 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4mfc" event={"ID":"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19","Type":"ContainerDied","Data":"c186493fcdfc5fde9cc33fc5aaa9510c70d27c3fcd895a6941ef5faeb8eb1581"} Jan 27 18:09:23 crc kubenswrapper[4907]: I0127 18:09:23.599101 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cg67x" event={"ID":"7ee8faea-87ec-4620-b6a8-db398d35039a","Type":"ContainerStarted","Data":"cca800a132dbc0637ef9f8a151d48baa2ebe9b0c352f4e619bea71a73ed6edb4"} Jan 27 18:09:23 crc kubenswrapper[4907]: I0127 18:09:23.687095 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:09:23 crc kubenswrapper[4907]: I0127 18:09:23.687176 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:09:23 crc kubenswrapper[4907]: I0127 18:09:23.744400 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:09:23 crc kubenswrapper[4907]: I0127 18:09:23.876291 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:09:23 crc kubenswrapper[4907]: I0127 18:09:23.915881 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-utilities\") pod \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " Jan 27 18:09:23 crc kubenswrapper[4907]: I0127 18:09:23.915985 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-catalog-content\") pod \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " Jan 27 18:09:23 crc kubenswrapper[4907]: I0127 18:09:23.916022 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nth5x\" (UniqueName: \"kubernetes.io/projected/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-kube-api-access-nth5x\") pod \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\" (UID: \"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19\") " Jan 27 18:09:23 crc kubenswrapper[4907]: I0127 18:09:23.919225 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-utilities" (OuterVolumeSpecName: "utilities") pod "e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" (UID: "e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:09:23 crc kubenswrapper[4907]: I0127 18:09:23.923331 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-kube-api-access-nth5x" (OuterVolumeSpecName: "kube-api-access-nth5x") pod "e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" (UID: "e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19"). InnerVolumeSpecName "kube-api-access-nth5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.017219 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nth5x\" (UniqueName: \"kubernetes.io/projected/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-kube-api-access-nth5x\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.017267 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.059091 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" (UID: "e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.118646 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.144934 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.145015 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.186053 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.611272 4907 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m4mfc" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.611276 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4mfc" event={"ID":"e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19","Type":"ContainerDied","Data":"b4e201da5789daaf413f87f58fcb846077da1151d2a01d4d561aec46a6d0a522"} Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.611839 4907 scope.go:117] "RemoveContainer" containerID="c186493fcdfc5fde9cc33fc5aaa9510c70d27c3fcd895a6941ef5faeb8eb1581" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.638995 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cg67x" podStartSLOduration=4.9349759760000005 podStartE2EDuration="1m11.638976684s" podCreationTimestamp="2026-01-27 18:08:13 +0000 UTC" firstStartedPulling="2026-01-27 18:08:15.405429526 +0000 UTC m=+150.534712128" lastFinishedPulling="2026-01-27 18:09:22.109430224 +0000 UTC m=+217.238712836" observedRunningTime="2026-01-27 18:09:24.637574833 +0000 UTC m=+219.766857445" watchObservedRunningTime="2026-01-27 18:09:24.638976684 +0000 UTC m=+219.768259296" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.652161 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m4mfc"] Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.661729 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m4mfc"] Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.661847 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:09:24 crc kubenswrapper[4907]: I0127 18:09:24.690066 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:09:25 crc kubenswrapper[4907]: I0127 
18:09:25.756776 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" path="/var/lib/kubelet/pods/e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19/volumes" Jan 27 18:09:25 crc kubenswrapper[4907]: I0127 18:09:25.999789 4907 scope.go:117] "RemoveContainer" containerID="69eb07d07e6a81384b1d0a6fb61a98baac37c5adcc43d4ee93e0d722ae9739f9" Jan 27 18:09:26 crc kubenswrapper[4907]: I0127 18:09:26.019806 4907 scope.go:117] "RemoveContainer" containerID="38285b14f93c22653ebbde6f30cf34ab1bec2a2df662e6ed5f2ede4a2203a9bb" Jan 27 18:09:26 crc kubenswrapper[4907]: I0127 18:09:26.521257 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:09:26 crc kubenswrapper[4907]: I0127 18:09:26.521333 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:09:26 crc kubenswrapper[4907]: I0127 18:09:26.521391 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:09:26 crc kubenswrapper[4907]: I0127 18:09:26.522117 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:09:26 crc kubenswrapper[4907]: I0127 
18:09:26.522197 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e" gracePeriod=600 Jan 27 18:09:26 crc kubenswrapper[4907]: I0127 18:09:26.945278 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pnt7r"] Jan 27 18:09:26 crc kubenswrapper[4907]: I0127 18:09:26.947288 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pnt7r" podUID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" containerName="registry-server" containerID="cri-o://51dc4f5416873859523ad79a85bd45e3b83feb84c4b4c53fc426f1c4e5109641" gracePeriod=2 Jan 27 18:09:28 crc kubenswrapper[4907]: I0127 18:09:28.635906 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e" exitCode=0 Jan 27 18:09:28 crc kubenswrapper[4907]: I0127 18:09:28.635972 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e"} Jan 27 18:09:28 crc kubenswrapper[4907]: I0127 18:09:28.637882 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-klwtz" event={"ID":"dee6d631-48d1-4137-9736-c028fb27e655","Type":"ContainerStarted","Data":"ee0b36e78c4be660d4c081e70ceb4caf889b14b20ef5255003245d03dea37b84"} Jan 27 18:09:28 crc kubenswrapper[4907]: I0127 18:09:28.640773 4907 generic.go:334] "Generic (PLEG): container finished" podID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" 
containerID="51dc4f5416873859523ad79a85bd45e3b83feb84c4b4c53fc426f1c4e5109641" exitCode=0 Jan 27 18:09:28 crc kubenswrapper[4907]: I0127 18:09:28.640801 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pnt7r" event={"ID":"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a","Type":"ContainerDied","Data":"51dc4f5416873859523ad79a85bd45e3b83feb84c4b4c53fc426f1c4e5109641"} Jan 27 18:09:28 crc kubenswrapper[4907]: I0127 18:09:28.660513 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-klwtz" podStartSLOduration=4.170619449 podStartE2EDuration="1m13.660494455s" podCreationTimestamp="2026-01-27 18:08:15 +0000 UTC" firstStartedPulling="2026-01-27 18:08:16.510121066 +0000 UTC m=+151.639403678" lastFinishedPulling="2026-01-27 18:09:25.999996072 +0000 UTC m=+221.129278684" observedRunningTime="2026-01-27 18:09:28.659195687 +0000 UTC m=+223.788478309" watchObservedRunningTime="2026-01-27 18:09:28.660494455 +0000 UTC m=+223.789777067" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.154855 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.308913 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-catalog-content\") pod \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.309312 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdt5q\" (UniqueName: \"kubernetes.io/projected/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-kube-api-access-gdt5q\") pod \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.309339 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-utilities\") pod \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\" (UID: \"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a\") " Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.310295 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-utilities" (OuterVolumeSpecName: "utilities") pod "2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" (UID: "2d1e3321-a7c6-4910-adec-31bf7b3c8f0a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.319116 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-kube-api-access-gdt5q" (OuterVolumeSpecName: "kube-api-access-gdt5q") pod "2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" (UID: "2d1e3321-a7c6-4910-adec-31bf7b3c8f0a"). InnerVolumeSpecName "kube-api-access-gdt5q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.367292 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" (UID: "2d1e3321-a7c6-4910-adec-31bf7b3c8f0a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.419083 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdt5q\" (UniqueName: \"kubernetes.io/projected/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-kube-api-access-gdt5q\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.419150 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.419164 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.651898 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pnt7r" event={"ID":"2d1e3321-a7c6-4910-adec-31bf7b3c8f0a","Type":"ContainerDied","Data":"d3c6770c98ff6f027f6fcb7e8ca70d2af0a26ac35823e4ed05e57a35a4dcfa76"} Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.651973 4907 scope.go:117] "RemoveContainer" containerID="51dc4f5416873859523ad79a85bd45e3b83feb84c4b4c53fc426f1c4e5109641" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.651993 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pnt7r" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.667845 4907 scope.go:117] "RemoveContainer" containerID="c46d6df9d82d4f4cc7de32f448a7f920700e22083478d02639b39cd9ec76b646" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.687195 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pnt7r"] Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.692439 4907 scope.go:117] "RemoveContainer" containerID="f78417433897aedb0b02b3af7c2f2b881e06ca35f9f9655a9f750f3ff4783dfe" Jan 27 18:09:30 crc kubenswrapper[4907]: I0127 18:09:30.697728 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pnt7r"] Jan 27 18:09:31 crc kubenswrapper[4907]: I0127 18:09:31.665035 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"42d20c6a3c7e78cf4dce8449106267d123618c4b64f512fb555d0ba2befbdb39"} Jan 27 18:09:31 crc kubenswrapper[4907]: I0127 18:09:31.758800 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" path="/var/lib/kubelet/pods/2d1e3321-a7c6-4910-adec-31bf7b3c8f0a/volumes" Jan 27 18:09:32 crc kubenswrapper[4907]: I0127 18:09:32.672628 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7l4d" event={"ID":"f317b8ef-4875-4f24-8926-8efd5826a51e","Type":"ContainerStarted","Data":"99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392"} Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.027444 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b7l4d" podStartSLOduration=6.388819767 podStartE2EDuration="1m20.027421306s" podCreationTimestamp="2026-01-27 18:08:13 
+0000 UTC" firstStartedPulling="2026-01-27 18:08:16.483486088 +0000 UTC m=+151.612768700" lastFinishedPulling="2026-01-27 18:09:30.122087627 +0000 UTC m=+225.251370239" observedRunningTime="2026-01-27 18:09:32.719799002 +0000 UTC m=+227.849081614" watchObservedRunningTime="2026-01-27 18:09:33.027421306 +0000 UTC m=+228.156703918" Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.027673 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68686db9cf-mskkh"] Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.027899 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" podUID="02a0c38d-5a4e-4189-86b8-6a42930553a2" containerName="controller-manager" containerID="cri-o://4e1e30e060260bb9949f48d6023ec4e2c5730e0958cb3c17d65dc70097744cb0" gracePeriod=30 Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.118789 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p"] Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.119294 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" podUID="090a67dd-469f-44de-9760-bb58338594d7" containerName="route-controller-manager" containerID="cri-o://af0aaa68d6a7d39bc50e64ab3608b0b7067cb3ee76f1074863d96a6211612143" gracePeriod=30 Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.356681 4907 patch_prober.go:28] interesting pod/controller-manager-68686db9cf-mskkh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.357119 4907 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" podUID="02a0c38d-5a4e-4189-86b8-6a42930553a2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.680110 4907 generic.go:334] "Generic (PLEG): container finished" podID="02a0c38d-5a4e-4189-86b8-6a42930553a2" containerID="4e1e30e060260bb9949f48d6023ec4e2c5730e0958cb3c17d65dc70097744cb0" exitCode=0 Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.680179 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" event={"ID":"02a0c38d-5a4e-4189-86b8-6a42930553a2","Type":"ContainerDied","Data":"4e1e30e060260bb9949f48d6023ec4e2c5730e0958cb3c17d65dc70097744cb0"} Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.682523 4907 generic.go:334] "Generic (PLEG): container finished" podID="090a67dd-469f-44de-9760-bb58338594d7" containerID="af0aaa68d6a7d39bc50e64ab3608b0b7067cb3ee76f1074863d96a6211612143" exitCode=0 Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.682773 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" event={"ID":"090a67dd-469f-44de-9760-bb58338594d7","Type":"ContainerDied","Data":"af0aaa68d6a7d39bc50e64ab3608b0b7067cb3ee76f1074863d96a6211612143"} Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.857756 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.858250 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:09:33 crc kubenswrapper[4907]: I0127 18:09:33.907452 4907 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.010643 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.076993 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-proxy-ca-bundles\") pod \"02a0c38d-5a4e-4189-86b8-6a42930553a2\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.077119 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02a0c38d-5a4e-4189-86b8-6a42930553a2-serving-cert\") pod \"02a0c38d-5a4e-4189-86b8-6a42930553a2\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.077162 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnlnb\" (UniqueName: \"kubernetes.io/projected/02a0c38d-5a4e-4189-86b8-6a42930553a2-kube-api-access-pnlnb\") pod \"02a0c38d-5a4e-4189-86b8-6a42930553a2\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.077187 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-client-ca\") pod \"02a0c38d-5a4e-4189-86b8-6a42930553a2\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.077209 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-config\") pod 
\"02a0c38d-5a4e-4189-86b8-6a42930553a2\" (UID: \"02a0c38d-5a4e-4189-86b8-6a42930553a2\") " Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.077990 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "02a0c38d-5a4e-4189-86b8-6a42930553a2" (UID: "02a0c38d-5a4e-4189-86b8-6a42930553a2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.078380 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-client-ca" (OuterVolumeSpecName: "client-ca") pod "02a0c38d-5a4e-4189-86b8-6a42930553a2" (UID: "02a0c38d-5a4e-4189-86b8-6a42930553a2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.078450 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-config" (OuterVolumeSpecName: "config") pod "02a0c38d-5a4e-4189-86b8-6a42930553a2" (UID: "02a0c38d-5a4e-4189-86b8-6a42930553a2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.084224 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02a0c38d-5a4e-4189-86b8-6a42930553a2-kube-api-access-pnlnb" (OuterVolumeSpecName: "kube-api-access-pnlnb") pod "02a0c38d-5a4e-4189-86b8-6a42930553a2" (UID: "02a0c38d-5a4e-4189-86b8-6a42930553a2"). InnerVolumeSpecName "kube-api-access-pnlnb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.091168 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02a0c38d-5a4e-4189-86b8-6a42930553a2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "02a0c38d-5a4e-4189-86b8-6a42930553a2" (UID: "02a0c38d-5a4e-4189-86b8-6a42930553a2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.179057 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02a0c38d-5a4e-4189-86b8-6a42930553a2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.179373 4907 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.179382 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.179392 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnlnb\" (UniqueName: \"kubernetes.io/projected/02a0c38d-5a4e-4189-86b8-6a42930553a2-kube-api-access-pnlnb\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.179403 4907 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02a0c38d-5a4e-4189-86b8-6a42930553a2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.295064 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.303029 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.303077 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.349321 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.381485 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/090a67dd-469f-44de-9760-bb58338594d7-serving-cert\") pod \"090a67dd-469f-44de-9760-bb58338594d7\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.381583 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-config\") pod \"090a67dd-469f-44de-9760-bb58338594d7\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.381623 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjktz\" (UniqueName: \"kubernetes.io/projected/090a67dd-469f-44de-9760-bb58338594d7-kube-api-access-zjktz\") pod \"090a67dd-469f-44de-9760-bb58338594d7\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.381686 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-client-ca\") pod 
\"090a67dd-469f-44de-9760-bb58338594d7\" (UID: \"090a67dd-469f-44de-9760-bb58338594d7\") " Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.382568 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-client-ca" (OuterVolumeSpecName: "client-ca") pod "090a67dd-469f-44de-9760-bb58338594d7" (UID: "090a67dd-469f-44de-9760-bb58338594d7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.382584 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-config" (OuterVolumeSpecName: "config") pod "090a67dd-469f-44de-9760-bb58338594d7" (UID: "090a67dd-469f-44de-9760-bb58338594d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.387710 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/090a67dd-469f-44de-9760-bb58338594d7-kube-api-access-zjktz" (OuterVolumeSpecName: "kube-api-access-zjktz") pod "090a67dd-469f-44de-9760-bb58338594d7" (UID: "090a67dd-469f-44de-9760-bb58338594d7"). InnerVolumeSpecName "kube-api-access-zjktz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.387790 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/090a67dd-469f-44de-9760-bb58338594d7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "090a67dd-469f-44de-9760-bb58338594d7" (UID: "090a67dd-469f-44de-9760-bb58338594d7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.482584 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/090a67dd-469f-44de-9760-bb58338594d7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.482624 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.482638 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjktz\" (UniqueName: \"kubernetes.io/projected/090a67dd-469f-44de-9760-bb58338594d7-kube-api-access-zjktz\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.482651 4907 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/090a67dd-469f-44de-9760-bb58338594d7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656105 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6656ff6484-mr4x9"] Jan 27 18:09:34 crc kubenswrapper[4907]: E0127 18:09:34.656454 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" containerName="extract-content" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656476 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" containerName="extract-content" Jan 27 18:09:34 crc kubenswrapper[4907]: E0127 18:09:34.656497 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="090a67dd-469f-44de-9760-bb58338594d7" containerName="route-controller-manager" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656512 4907 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="090a67dd-469f-44de-9760-bb58338594d7" containerName="route-controller-manager" Jan 27 18:09:34 crc kubenswrapper[4907]: E0127 18:09:34.656532 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" containerName="registry-server" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656544 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" containerName="registry-server" Jan 27 18:09:34 crc kubenswrapper[4907]: E0127 18:09:34.656597 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09" containerName="pruner" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656615 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09" containerName="pruner" Jan 27 18:09:34 crc kubenswrapper[4907]: E0127 18:09:34.656658 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerName="extract-content" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656672 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerName="extract-content" Jan 27 18:09:34 crc kubenswrapper[4907]: E0127 18:09:34.656688 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a0c38d-5a4e-4189-86b8-6a42930553a2" containerName="controller-manager" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656701 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a0c38d-5a4e-4189-86b8-6a42930553a2" containerName="controller-manager" Jan 27 18:09:34 crc kubenswrapper[4907]: E0127 18:09:34.656725 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerName="extract-utilities" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656737 4907 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerName="extract-utilities" Jan 27 18:09:34 crc kubenswrapper[4907]: E0127 18:09:34.656754 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerName="registry-server" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656766 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerName="registry-server" Jan 27 18:09:34 crc kubenswrapper[4907]: E0127 18:09:34.656783 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" containerName="extract-utilities" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656794 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" containerName="extract-utilities" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656972 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d1e3321-a7c6-4910-adec-31bf7b3c8f0a" containerName="registry-server" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.656997 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="090a67dd-469f-44de-9760-bb58338594d7" containerName="route-controller-manager" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.657014 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="02a0c38d-5a4e-4189-86b8-6a42930553a2" containerName="controller-manager" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.657029 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8c7f489-c85f-47d4-9ef7-d0f9aba0cc19" containerName="registry-server" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.657048 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc4e2708-c8cf-4967-a8f7-a86a5e2a7f09" containerName="pruner" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.657611 4907 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.662678 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk"] Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.663846 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.676799 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6656ff6484-mr4x9"] Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.678316 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk"] Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.685032 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-client-ca\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.685066 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-serving-cert\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.685100 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-proxy-ca-bundles\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.685127 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-config\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.685144 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68c8acc2-637c-4006-848e-bed0c1ea77fc-serving-cert\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.685168 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-config\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.685187 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-client-ca\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " 
pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.685205 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlgrb\" (UniqueName: \"kubernetes.io/projected/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-kube-api-access-qlgrb\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.685228 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r5tb\" (UniqueName: \"kubernetes.io/projected/68c8acc2-637c-4006-848e-bed0c1ea77fc-kube-api-access-9r5tb\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.692922 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" event={"ID":"02a0c38d-5a4e-4189-86b8-6a42930553a2","Type":"ContainerDied","Data":"c20e1125c69187397fd3d90a194c9f5e0105aa13b720cd7fbb8a8dbc1ba84399"} Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.693257 4907 scope.go:117] "RemoveContainer" containerID="4e1e30e060260bb9949f48d6023ec4e2c5730e0958cb3c17d65dc70097744cb0" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.693526 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68686db9cf-mskkh" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.699806 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" event={"ID":"090a67dd-469f-44de-9760-bb58338594d7","Type":"ContainerDied","Data":"ef0720fe87e3f12e59dc7091d752de57214cbed73b14c8a59f3d4b6af6479d52"} Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.699904 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.707293 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhwph" event={"ID":"1f9526ea-3ca9-4727-aadd-3103419511d9","Type":"ContainerStarted","Data":"c4e1f6d017c07ec4d982e1d85c078c0a9d796f21d3669902a20eff03a671e183"} Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.715784 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcgf" event={"ID":"ed699310-2f9f-414f-ad04-7778af36ddb7","Type":"ContainerStarted","Data":"48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303"} Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.725362 4907 scope.go:117] "RemoveContainer" containerID="af0aaa68d6a7d39bc50e64ab3608b0b7067cb3ee76f1074863d96a6211612143" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.744718 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-glcgf" podStartSLOduration=3.871855473 podStartE2EDuration="1m19.744696011s" podCreationTimestamp="2026-01-27 18:08:15 +0000 UTC" firstStartedPulling="2026-01-27 18:08:17.68214653 +0000 UTC m=+152.811429142" lastFinishedPulling="2026-01-27 18:09:33.554987068 +0000 UTC m=+228.684269680" observedRunningTime="2026-01-27 
18:09:34.738971733 +0000 UTC m=+229.868254345" watchObservedRunningTime="2026-01-27 18:09:34.744696011 +0000 UTC m=+229.873978633" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.758388 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jhwph" podStartSLOduration=3.830156403 podStartE2EDuration="1m18.758364161s" podCreationTimestamp="2026-01-27 18:08:16 +0000 UTC" firstStartedPulling="2026-01-27 18:08:18.620411474 +0000 UTC m=+153.749694086" lastFinishedPulling="2026-01-27 18:09:33.548619232 +0000 UTC m=+228.677901844" observedRunningTime="2026-01-27 18:09:34.758218237 +0000 UTC m=+229.887500849" watchObservedRunningTime="2026-01-27 18:09:34.758364161 +0000 UTC m=+229.887646773" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.761138 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.776185 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p"] Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.785499 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679d4f688f-zzf7p"] Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.786252 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-config\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.786290 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-client-ca\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.786314 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlgrb\" (UniqueName: \"kubernetes.io/projected/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-kube-api-access-qlgrb\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.786360 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r5tb\" (UniqueName: \"kubernetes.io/projected/68c8acc2-637c-4006-848e-bed0c1ea77fc-kube-api-access-9r5tb\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.786390 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-client-ca\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.786419 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-serving-cert\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 
27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.786477 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-proxy-ca-bundles\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.786501 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-config\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.786515 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68c8acc2-637c-4006-848e-bed0c1ea77fc-serving-cert\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.788014 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-client-ca\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.788287 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-config\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " 
pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.789104 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-client-ca\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.789582 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-proxy-ca-bundles\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.789734 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-config\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.791108 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-serving-cert\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.791159 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68686db9cf-mskkh"] Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.792720 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68c8acc2-637c-4006-848e-bed0c1ea77fc-serving-cert\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.796177 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-68686db9cf-mskkh"] Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.806300 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlgrb\" (UniqueName: \"kubernetes.io/projected/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-kube-api-access-qlgrb\") pod \"route-controller-manager-7cb9b55fc9-6sdxk\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.810749 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r5tb\" (UniqueName: \"kubernetes.io/projected/68c8acc2-637c-4006-848e-bed0c1ea77fc-kube-api-access-9r5tb\") pod \"controller-manager-6656ff6484-mr4x9\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.978119 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:34 crc kubenswrapper[4907]: I0127 18:09:34.988148 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.261807 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6656ff6484-mr4x9"] Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.389915 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk"] Jan 27 18:09:35 crc kubenswrapper[4907]: W0127 18:09:35.399766 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09e10c2d_9dea_4d6c_9d36_feb0fdd0df13.slice/crio-0a82a281ef0946127a91cd85165ac47e78c9805c96825d5e1f93d6916e167d0f WatchSource:0}: Error finding container 0a82a281ef0946127a91cd85165ac47e78c9805c96825d5e1f93d6916e167d0f: Status 404 returned error can't find the container with id 0a82a281ef0946127a91cd85165ac47e78c9805c96825d5e1f93d6916e167d0f Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.724450 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" event={"ID":"68c8acc2-637c-4006-848e-bed0c1ea77fc","Type":"ContainerStarted","Data":"2942b6d9edb2e80cf23dc546bc0f39c3e93845fd655de437812a42a8ae231f06"} Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.724972 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.724994 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" event={"ID":"68c8acc2-637c-4006-848e-bed0c1ea77fc","Type":"ContainerStarted","Data":"e14ea8d0f79988b247be75c0c550ba68530c65e3005c05e315cd0f64e6973a7d"} Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.726712 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" event={"ID":"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13","Type":"ContainerStarted","Data":"d05e8f413d12c557899d680b62b203c67b646770dae02b4ad98bf6608a23a5de"} Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.726750 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" event={"ID":"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13","Type":"ContainerStarted","Data":"0a82a281ef0946127a91cd85165ac47e78c9805c96825d5e1f93d6916e167d0f"} Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.727825 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.730084 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.753991 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" podStartSLOduration=2.753974983 podStartE2EDuration="2.753974983s" podCreationTimestamp="2026-01-27 18:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:09:35.753671134 +0000 UTC m=+230.882953756" watchObservedRunningTime="2026-01-27 18:09:35.753974983 +0000 UTC m=+230.883257585" Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.760419 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02a0c38d-5a4e-4189-86b8-6a42930553a2" path="/var/lib/kubelet/pods/02a0c38d-5a4e-4189-86b8-6a42930553a2/volumes" Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.761202 4907 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="090a67dd-469f-44de-9760-bb58338594d7" path="/var/lib/kubelet/pods/090a67dd-469f-44de-9760-bb58338594d7/volumes" Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.821708 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.821767 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.872120 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:09:35 crc kubenswrapper[4907]: I0127 18:09:35.893107 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" podStartSLOduration=2.893088156 podStartE2EDuration="2.893088156s" podCreationTimestamp="2026-01-27 18:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:09:35.852326082 +0000 UTC m=+230.981608694" watchObservedRunningTime="2026-01-27 18:09:35.893088156 +0000 UTC m=+231.022370768" Jan 27 18:09:36 crc kubenswrapper[4907]: I0127 18:09:36.045436 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:09:36 crc kubenswrapper[4907]: I0127 18:09:36.045575 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:09:36 crc kubenswrapper[4907]: I0127 18:09:36.096310 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:09:36 crc kubenswrapper[4907]: I0127 18:09:36.132194 4907 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:09:36 crc kubenswrapper[4907]: I0127 18:09:36.292333 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lg6ln"] Jan 27 18:09:36 crc kubenswrapper[4907]: I0127 18:09:36.774474 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:09:37 crc kubenswrapper[4907]: I0127 18:09:37.157684 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:09:37 crc kubenswrapper[4907]: I0127 18:09:37.157744 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:09:38 crc kubenswrapper[4907]: I0127 18:09:38.214988 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jhwph" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerName="registry-server" probeResult="failure" output=< Jan 27 18:09:38 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 18:09:38 crc kubenswrapper[4907]: > Jan 27 18:09:42 crc kubenswrapper[4907]: E0127 18:09:42.326229 4907 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.328956 4907 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.330200 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.348503 4907 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.348974 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80" gracePeriod=15 Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.348995 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227" gracePeriod=15 Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.348997 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531" gracePeriod=15 Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.349049 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01" gracePeriod=15 Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.349005 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c" gracePeriod=15 Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.350424 4907 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 18:09:42 crc kubenswrapper[4907]: E0127 18:09:42.350819 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.350841 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:09:42 crc kubenswrapper[4907]: E0127 18:09:42.350866 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.350878 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 18:09:42 crc kubenswrapper[4907]: E0127 18:09:42.350908 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.350922 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 18:09:42 crc kubenswrapper[4907]: E0127 18:09:42.350948 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.351148 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-syncer" Jan 27 18:09:42 crc kubenswrapper[4907]: E0127 18:09:42.351163 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.351174 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 18:09:42 crc kubenswrapper[4907]: E0127 18:09:42.351192 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.351204 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.351372 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.351389 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.351451 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.351469 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.351483 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 18:09:42 crc kubenswrapper[4907]: E0127 18:09:42.351677 4907 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.351693 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.351852 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.385038 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.515467 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.515579 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.515758 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.515997 
4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.516145 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.516177 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.516273 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.516340 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 
18:09:42.617254 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617318 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617338 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617372 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617420 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617424 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617450 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617485 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617500 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617503 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617519 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617607 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617528 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617698 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617824 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.617918 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.676941 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:09:42 crc kubenswrapper[4907]: W0127 18:09:42.706657 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-ce6aa49c5f80915eca105233b215e5429a3280f34e6be84c866fbb5a8f6807e6 WatchSource:0}: Error finding container ce6aa49c5f80915eca105233b215e5429a3280f34e6be84c866fbb5a8f6807e6: Status 404 returned error can't find the container with id ce6aa49c5f80915eca105233b215e5429a3280f34e6be84c866fbb5a8f6807e6 Jan 27 18:09:42 crc kubenswrapper[4907]: E0127 18:09:42.710844 4907 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.184:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ea8dfc92f2dcf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 18:09:42.709669327 +0000 UTC m=+237.838951939,LastTimestamp:2026-01-27 18:09:42.709669327 +0000 UTC 
m=+237.838951939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.782487 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.784802 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.785519 4907 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01" exitCode=0 Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.785565 4907 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227" exitCode=0 Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.785575 4907 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c" exitCode=0 Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.785584 4907 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531" exitCode=2 Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.785608 4907 scope.go:117] "RemoveContainer" containerID="51b2df316164ca421dc8818adc5fcce5c12057e5058304840bcb49e6dab335d9" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.787591 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ce6aa49c5f80915eca105233b215e5429a3280f34e6be84c866fbb5a8f6807e6"} Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.789380 4907 generic.go:334] "Generic (PLEG): container finished" podID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" containerID="84d1252629f94f6052dc7e9f370b57a2a08987285cf9a2ccd0aa7378e581c2a3" exitCode=0 Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.789479 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9f4bcf33-f579-4173-afa7-055fe0ed0e8b","Type":"ContainerDied","Data":"84d1252629f94f6052dc7e9f370b57a2a08987285cf9a2ccd0aa7378e581c2a3"} Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.790314 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.790655 4907 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:42 crc kubenswrapper[4907]: I0127 18:09:42.791131 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:43 crc 
kubenswrapper[4907]: I0127 18:09:43.799580 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"53468f14c49ef8880b385ebe2e20251fed1e504dc30d1d9a335aca847959a232"} Jan 27 18:09:43 crc kubenswrapper[4907]: I0127 18:09:43.800439 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:43 crc kubenswrapper[4907]: I0127 18:09:43.801913 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:43 crc kubenswrapper[4907]: I0127 18:09:43.802687 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.179836 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.180738 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.181422 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.342687 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kubelet-dir\") pod \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.342769 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-var-lock\") pod \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.342810 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kube-api-access\") pod \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\" (UID: \"9f4bcf33-f579-4173-afa7-055fe0ed0e8b\") " Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.342804 4907 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9f4bcf33-f579-4173-afa7-055fe0ed0e8b" (UID: "9f4bcf33-f579-4173-afa7-055fe0ed0e8b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.342861 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-var-lock" (OuterVolumeSpecName: "var-lock") pod "9f4bcf33-f579-4173-afa7-055fe0ed0e8b" (UID: "9f4bcf33-f579-4173-afa7-055fe0ed0e8b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.343754 4907 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.343816 4907 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.344438 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.345195 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.345828 4907 status_manager.go:851] "Failed to 
get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.346284 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.352461 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9f4bcf33-f579-4173-afa7-055fe0ed0e8b" (UID: "9f4bcf33-f579-4173-afa7-055fe0ed0e8b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.445475 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f4bcf33-f579-4173-afa7-055fe0ed0e8b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.814313 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.815416 4907 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80" exitCode=0 Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.818809 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9f4bcf33-f579-4173-afa7-055fe0ed0e8b","Type":"ContainerDied","Data":"c2422fce3252ef87a2db89e76b11dc4004496dc8add034465ec5552dbf70bdce"} Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.818882 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2422fce3252ef87a2db89e76b11dc4004496dc8add034465ec5552dbf70bdce" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.818845 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.834506 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.834953 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:44 crc kubenswrapper[4907]: I0127 18:09:44.835310 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: E0127 18:09:45.065713 4907 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: E0127 18:09:45.065958 4907 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: E0127 18:09:45.066202 4907 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: E0127 18:09:45.066405 4907 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: E0127 18:09:45.066642 4907 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.066672 4907 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 18:09:45 crc kubenswrapper[4907]: E0127 18:09:45.066865 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="200ms" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.260122 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.261882 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.262602 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.263087 4907 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.263518 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.263954 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: E0127 18:09:45.267432 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="400ms" Jan 27 
18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.459301 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.459421 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.459439 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.459487 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.459600 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.459636 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.459926 4907 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.459951 4907 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.459970 4907 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:09:45 crc kubenswrapper[4907]: E0127 18:09:45.668020 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="800ms" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.753517 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 
crc kubenswrapper[4907]: I0127 18:09:45.754533 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.755060 4907 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.755355 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.764289 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.841684 4907 scope.go:117] "RemoveContainer" containerID="5a1e13c462edfc55ca6293da8e0f6fa2dfa8e73b6cad23c7a8d8628ff650ab01" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.841887 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.843547 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.844089 4907 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.844456 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.844864 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.847610 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.848066 4907 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.848730 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.849883 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.863001 4907 scope.go:117] "RemoveContainer" containerID="46f0baeaebe27363b66bb4abee257bac168d19aa9bf4fec93240ef68831f7227" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.897925 4907 scope.go:117] "RemoveContainer" containerID="992cda334699eee8784374d390b2ce75f90280cf3f8dd816e761d476b4ab927c" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.914884 4907 scope.go:117] "RemoveContainer" containerID="aec67df92b86c1501fead4cbe7e1cacf79aa22195bd694c6d173fa559e7c4531" Jan 27 18:09:45 crc kubenswrapper[4907]: I0127 18:09:45.931467 4907 scope.go:117] "RemoveContainer" containerID="e8d39fa816493d852b92c2c451128f4174ebc35c5974a49dc17536a97636dd80" Jan 27 18:09:45 crc 
kubenswrapper[4907]: I0127 18:09:45.951618 4907 scope.go:117] "RemoveContainer" containerID="20efd8e5ec34e5f0f75c2c63dfbcb5ba342998d7183b7b42a02daab2616c3728" Jan 27 18:09:46 crc kubenswrapper[4907]: E0127 18:09:46.005953 4907 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.184:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ea8dfc92f2dcf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 18:09:42.709669327 +0000 UTC m=+237.838951939,LastTimestamp:2026-01-27 18:09:42.709669327 +0000 UTC m=+237.838951939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 18:09:46 crc kubenswrapper[4907]: I0127 18:09:46.082756 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:09:46 crc kubenswrapper[4907]: I0127 18:09:46.083761 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:46 crc kubenswrapper[4907]: 
I0127 18:09:46.084175 4907 status_manager.go:851] "Failed to get status for pod" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" pod="openshift-marketplace/redhat-marketplace-glcgf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-glcgf\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:46 crc kubenswrapper[4907]: I0127 18:09:46.084648 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:46 crc kubenswrapper[4907]: I0127 18:09:46.084976 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:46 crc kubenswrapper[4907]: I0127 18:09:46.085225 4907 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:46 crc kubenswrapper[4907]: E0127 18:09:46.468631 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="1.6s" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.204981 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.205620 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.206680 4907 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.207185 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.207607 4907 status_manager.go:851] "Failed to get status for pod" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" pod="openshift-marketplace/redhat-marketplace-glcgf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-glcgf\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.207991 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial 
tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.208316 4907 status_manager.go:851] "Failed to get status for pod" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" pod="openshift-marketplace/redhat-operators-jhwph" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhwph\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.239781 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.240632 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.241029 4907 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.241489 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.241911 4907 status_manager.go:851] "Failed to get status for pod" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" 
pod="openshift-marketplace/redhat-marketplace-glcgf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-glcgf\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.242196 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[4907]: I0127 18:09:47.242611 4907 status_manager.go:851] "Failed to get status for pod" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" pod="openshift-marketplace/redhat-operators-jhwph" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhwph\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:48 crc kubenswrapper[4907]: E0127 18:09:48.070008 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="3.2s" Jan 27 18:09:51 crc kubenswrapper[4907]: E0127 18:09:51.270704 4907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="6.4s" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.747979 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.752426 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.753227 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.754113 4907 status_manager.go:851] "Failed to get status for pod" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" pod="openshift-marketplace/redhat-marketplace-glcgf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-glcgf\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.754844 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.763659 4907 status_manager.go:851] "Failed to get status for pod" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" pod="openshift-marketplace/redhat-operators-jhwph" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhwph\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.764583 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.765254 4907 status_manager.go:851] "Failed to get status for pod" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" pod="openshift-marketplace/redhat-operators-jhwph" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhwph\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.765806 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.766399 4907 status_manager.go:851] "Failed to get status for pod" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" pod="openshift-marketplace/redhat-marketplace-glcgf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-glcgf\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.766989 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.774179 4907 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.774234 4907 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079" Jan 27 18:09:55 crc kubenswrapper[4907]: E0127 18:09:55.774860 4907 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:55 crc kubenswrapper[4907]: I0127 18:09:55.775961 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:55 crc kubenswrapper[4907]: W0127 18:09:55.813880 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-eec1b20d04e0fd17fb0a6d60bb6bfbef5979d6efe2aeb83889907a82cec31d3a WatchSource:0}: Error finding container eec1b20d04e0fd17fb0a6d60bb6bfbef5979d6efe2aeb83889907a82cec31d3a: Status 404 returned error can't find the container with id eec1b20d04e0fd17fb0a6d60bb6bfbef5979d6efe2aeb83889907a82cec31d3a Jan 27 18:09:56 crc kubenswrapper[4907]: E0127 18:09:56.008067 4907 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.184:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ea8dfc92f2dcf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 18:09:42.709669327 +0000 UTC m=+237.838951939,LastTimestamp:2026-01-27 18:09:42.709669327 +0000 UTC m=+237.838951939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.493834 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.493902 4907 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909" exitCode=1 Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.493990 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909"} Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.494648 4907 scope.go:117] "RemoveContainer" containerID="56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.494966 4907 status_manager.go:851] "Failed to get status for pod" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" pod="openshift-marketplace/redhat-operators-jhwph" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhwph\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.495273 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.495704 4907 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.496092 4907 status_manager.go:851] "Failed to get status for pod" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.496849 4907 status_manager.go:851] "Failed to get status for pod" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" pod="openshift-marketplace/redhat-marketplace-glcgf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-glcgf\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.497410 4907 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="4520c50f3526756fe670efd97cec6f84614e2659a6a8da7357dfa1fdf34161f4" exitCode=0 Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.497489 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4520c50f3526756fe670efd97cec6f84614e2659a6a8da7357dfa1fdf34161f4"} Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.497540 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"eec1b20d04e0fd17fb0a6d60bb6bfbef5979d6efe2aeb83889907a82cec31d3a"} Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.497506 4907 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.497930 4907 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.497954 4907 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079" Jan 27 18:09:56 crc kubenswrapper[4907]: E0127 18:09:56.498275 4907 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.498608 4907 status_manager.go:851] "Failed to get status for pod" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.499138 4907 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.499708 4907 status_manager.go:851] "Failed to get status for pod" 
podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" pod="openshift-marketplace/certified-operators-b7l4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b7l4d\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.500305 4907 status_manager.go:851] "Failed to get status for pod" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" pod="openshift-marketplace/redhat-marketplace-glcgf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-glcgf\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.501174 4907 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:56 crc kubenswrapper[4907]: I0127 18:09:56.502013 4907 status_manager.go:851] "Failed to get status for pod" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" pod="openshift-marketplace/redhat-operators-jhwph" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhwph\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 27 18:09:57 crc kubenswrapper[4907]: I0127 18:09:57.516214 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 18:09:57 crc kubenswrapper[4907]: I0127 18:09:57.516327 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e6c150e0fe85afdfda2661378a17099174c098af26dca09f65158be57ca5572f"} Jan 27 18:09:57 crc kubenswrapper[4907]: I0127 18:09:57.520253 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cbfabf6bdb370d8ddd4e8c144d8680688298a9eb3d89e99d995d9e8dbdbcdb98"} Jan 27 18:09:57 crc kubenswrapper[4907]: I0127 18:09:57.520289 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a7c29a98be59fbfc6081c6126c1b422e6d10a46006b1798155b83b6f83ad77f5"} Jan 27 18:09:57 crc kubenswrapper[4907]: I0127 18:09:57.520301 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0bcc01cfb7779933ae708a32a8992b37eaaead0a4a3626177097b65bbf6f4c1b"} Jan 27 18:09:58 crc kubenswrapper[4907]: I0127 18:09:58.528709 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a28f657812abf772ebda7da819e5f32dac72bfcedc3e01636e6799a57bd0e649"} Jan 27 18:09:58 crc kubenswrapper[4907]: I0127 18:09:58.529073 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8db4893b9a3883d33aaad0c6486c434baac421647622d5d28810bed69c579c31"} Jan 27 18:09:58 crc kubenswrapper[4907]: I0127 18:09:58.529306 4907 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079" Jan 27 18:09:58 crc kubenswrapper[4907]: I0127 18:09:58.529319 4907 
mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079" Jan 27 18:09:58 crc kubenswrapper[4907]: I0127 18:09:58.529516 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:10:00 crc kubenswrapper[4907]: I0127 18:10:00.776735 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:10:00 crc kubenswrapper[4907]: I0127 18:10:00.777285 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:10:00 crc kubenswrapper[4907]: I0127 18:10:00.778081 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:10:00 crc kubenswrapper[4907]: I0127 18:10:00.785755 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:10:01 crc kubenswrapper[4907]: I0127 18:10:01.328527 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" podUID="df82c5b4-85d6-4b74-85f5-46d598058d2d" containerName="oauth-openshift" containerID="cri-o://764bfb723ebdd0c728f2ec4cbdbb8ff8d31c71769392ab7b2e1ccf580ddc01dc" gracePeriod=15 Jan 27 18:10:01 crc kubenswrapper[4907]: I0127 18:10:01.547502 4907 generic.go:334] "Generic (PLEG): container finished" podID="df82c5b4-85d6-4b74-85f5-46d598058d2d" containerID="764bfb723ebdd0c728f2ec4cbdbb8ff8d31c71769392ab7b2e1ccf580ddc01dc" exitCode=0 Jan 27 18:10:01 crc kubenswrapper[4907]: I0127 18:10:01.547637 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" 
event={"ID":"df82c5b4-85d6-4b74-85f5-46d598058d2d","Type":"ContainerDied","Data":"764bfb723ebdd0c728f2ec4cbdbb8ff8d31c71769392ab7b2e1ccf580ddc01dc"} Jan 27 18:10:01 crc kubenswrapper[4907]: I0127 18:10:01.886756 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063190 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-ocp-branding-template\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063258 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-serving-cert\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063289 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-cliconfig\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063313 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-trusted-ca-bundle\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063353 4907 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-router-certs\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063405 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-login\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063455 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdmhs\" (UniqueName: \"kubernetes.io/projected/df82c5b4-85d6-4b74-85f5-46d598058d2d-kube-api-access-vdmhs\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063476 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-idp-0-file-data\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063508 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-policies\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063535 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-provider-selection\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063592 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-error\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063628 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-service-ca\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063650 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-session\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063674 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-dir\") pod \"df82c5b4-85d6-4b74-85f5-46d598058d2d\" (UID: \"df82c5b4-85d6-4b74-85f5-46d598058d2d\") " Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.063917 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-dir" (OuterVolumeSpecName: 
"audit-dir") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.064536 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.065041 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.066156 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.068483 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.070720 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.071164 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.072273 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.072692 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df82c5b4-85d6-4b74-85f5-46d598058d2d-kube-api-access-vdmhs" (OuterVolumeSpecName: "kube-api-access-vdmhs") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "kube-api-access-vdmhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.074954 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.075357 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.077224 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.077796 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.077987 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "df82c5b4-85d6-4b74-85f5-46d598058d2d" (UID: "df82c5b4-85d6-4b74-85f5-46d598058d2d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165165 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdmhs\" (UniqueName: \"kubernetes.io/projected/df82c5b4-85d6-4b74-85f5-46d598058d2d-kube-api-access-vdmhs\") on node \"crc\" DevicePath \"\"" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165220 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165244 4907 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165267 4907 reconciler_common.go:293] "Volume detached for 
volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165419 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165442 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165461 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165480 4907 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/df82c5b4-85d6-4b74-85f5-46d598058d2d-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165504 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165531 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:10:02 crc 
kubenswrapper[4907]: I0127 18:10:02.165628 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165654 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165682 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.165710 4907 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/df82c5b4-85d6-4b74-85f5-46d598058d2d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.557270 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" event={"ID":"df82c5b4-85d6-4b74-85f5-46d598058d2d","Type":"ContainerDied","Data":"4a8097cce43ecee42c97c1d9ab5869697b268e0b34ef8036d5f9d6948ff49dc9"} Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.557316 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lg6ln" Jan 27 18:10:02 crc kubenswrapper[4907]: I0127 18:10:02.557346 4907 scope.go:117] "RemoveContainer" containerID="764bfb723ebdd0c728f2ec4cbdbb8ff8d31c71769392ab7b2e1ccf580ddc01dc" Jan 27 18:10:03 crc kubenswrapper[4907]: I0127 18:10:03.541265 4907 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:10:03 crc kubenswrapper[4907]: I0127 18:10:03.564545 4907 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079" Jan 27 18:10:03 crc kubenswrapper[4907]: I0127 18:10:03.564586 4907 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079" Jan 27 18:10:03 crc kubenswrapper[4907]: I0127 18:10:03.569043 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:10:04 crc kubenswrapper[4907]: I0127 18:10:04.572051 4907 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079" Jan 27 18:10:04 crc kubenswrapper[4907]: I0127 18:10:04.572104 4907 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079" Jan 27 18:10:05 crc kubenswrapper[4907]: I0127 18:10:05.768683 4907 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ac3d38ac-9819-47d6-9772-28743485f643" Jan 27 18:10:06 crc kubenswrapper[4907]: I0127 18:10:06.267085 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:10:06 crc 
kubenswrapper[4907]: I0127 18:10:06.267513 4907 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 18:10:06 crc kubenswrapper[4907]: I0127 18:10:06.267582 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 18:10:13 crc kubenswrapper[4907]: I0127 18:10:13.301538 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 18:10:14 crc kubenswrapper[4907]: I0127 18:10:14.164022 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 18:10:14 crc kubenswrapper[4907]: I0127 18:10:14.357435 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 18:10:14 crc kubenswrapper[4907]: I0127 18:10:14.630929 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 18:10:14 crc kubenswrapper[4907]: I0127 18:10:14.674998 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 18:10:15 crc kubenswrapper[4907]: I0127 18:10:15.059695 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 18:10:15 crc kubenswrapper[4907]: I0127 18:10:15.073902 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 18:10:15 
crc kubenswrapper[4907]: I0127 18:10:15.086103 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 18:10:15 crc kubenswrapper[4907]: I0127 18:10:15.489600 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 18:10:15 crc kubenswrapper[4907]: I0127 18:10:15.641322 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 18:10:15 crc kubenswrapper[4907]: I0127 18:10:15.820999 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 18:10:15 crc kubenswrapper[4907]: I0127 18:10:15.903121 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.149379 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.154936 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.215977 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.266961 4907 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.267042 4907 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.302923 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.328391 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.444872 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.446246 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.465082 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.496921 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.527912 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.535712 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.583333 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.657128 4907 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.690103 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.843661 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.962290 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 18:10:16 crc kubenswrapper[4907]: I0127 18:10:16.967048 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.081465 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.117638 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.259149 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.259595 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.325256 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.364030 4907 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.373419 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.405146 4907 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.422987 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.426779 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.552814 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.582621 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.820372 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.821713 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.950725 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 18:10:17 crc kubenswrapper[4907]: I0127 18:10:17.959364 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.075548 4907 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.148594 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.320312 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.386596 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.408602 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.453073 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.549831 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.581057 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.591000 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.616757 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.662338 4907 reflector.go:368] Caches populated for *v1.Service from 
k8s.io/client-go/informers/factory.go:160 Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.665143 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.669534 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.698972 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.699457 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.711110 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.740360 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 18:10:18 crc kubenswrapper[4907]: I0127 18:10:18.925103 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.000791 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.019091 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.064791 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.086484 4907 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.137097 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.341980 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.342759 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.393207 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.464654 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.497949 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.526476 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.527396 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.617207 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.769658 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 18:10:19 crc 
kubenswrapper[4907]: I0127 18:10:19.809893 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.860681 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 18:10:19 crc kubenswrapper[4907]: I0127 18:10:19.896753 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.005757 4907 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.014128 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.037504 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.052709 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.054309 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.055138 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.074974 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.096701 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 18:10:20 crc 
kubenswrapper[4907]: I0127 18:10:20.101683 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.122913 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.170585 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.178965 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.196008 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.201834 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.545175 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.597525 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.643370 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.676792 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 18:10:20 crc kubenswrapper[4907]: I0127 18:10:20.678375 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 18:10:21 crc 
kubenswrapper[4907]: I0127 18:10:21.019384 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.114236 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.169222 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.227120 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.239454 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.326978 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.357583 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.474497 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.606426 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.622969 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.635128 4907 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.647227 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.718480 4907 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.744577 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.759987 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.802281 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.925010 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.950871 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 27 18:10:21 crc kubenswrapper[4907]: I0127 18:10:21.978890 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.015618 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.119750 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.120435 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.120807 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.136137 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.139397 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.145344 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.179220 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.206063 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.267548 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.351321 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.690700 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.770687 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.800060 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 27 18:10:22 crc kubenswrapper[4907]: I0127 18:10:22.981720 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.077034 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.247577 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.319390 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.376774 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.397177 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.402073 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.412384 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.431477 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.507270 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.592075 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.622246 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.659343 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.680605 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.767434 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.903792 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 27 18:10:23 crc kubenswrapper[4907]: I0127 18:10:23.907122 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.033637 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.034087 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.129353 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.147173 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.201869 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.275821 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.275887 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.337997 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.372980 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.678245 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.778178 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.801271 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.839809 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 27 18:10:24 crc kubenswrapper[4907]: I0127 18:10:24.866515 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.004695 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.036697 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.053542 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.136306 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.180569 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.209639 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.220899 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.300196 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.357864 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.639270 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.661699 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.704312 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.775755 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.935658 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 27 18:10:25 crc kubenswrapper[4907]: I0127 18:10:25.990884 4907 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.009457 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.084940 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.267197 4907 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.267307 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.267385 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.268871 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"e6c150e0fe85afdfda2661378a17099174c098af26dca09f65158be57ca5572f"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.269091 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://e6c150e0fe85afdfda2661378a17099174c098af26dca09f65158be57ca5572f" gracePeriod=30
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.292712 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.302262 4907 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.306754 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=44.306726951 podStartE2EDuration="44.306726951s" podCreationTimestamp="2026-01-27 18:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:10:03.401674505 +0000 UTC m=+258.530957117" watchObservedRunningTime="2026-01-27 18:10:26.306726951 +0000 UTC m=+281.436009613"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.311363 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lg6ln","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.311445 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-788784fd4b-j7f9b","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 27 18:10:26 crc kubenswrapper[4907]: E0127 18:10:26.311795 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df82c5b4-85d6-4b74-85f5-46d598058d2d" containerName="oauth-openshift"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.311832 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="df82c5b4-85d6-4b74-85f5-46d598058d2d" containerName="oauth-openshift"
Jan 27 18:10:26 crc kubenswrapper[4907]: E0127 18:10:26.311876 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" containerName="installer"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.311895 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" containerName="installer"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.311975 4907 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.312005 4907 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3972e3bc-1760-4cb8-b2d0-6758a782c079"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.312100 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f4bcf33-f579-4173-afa7-055fe0ed0e8b" containerName="installer"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.312139 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="df82c5b4-85d6-4b74-85f5-46d598058d2d" containerName="oauth-openshift"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.312953 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.316313 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.316983 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.320654 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.321681 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.324317 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.324352 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.326424 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.326532 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.326633 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.326664 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.327156 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.327395 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.327599 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.327637 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.336738 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.342957 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.345912 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.358194 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.359937 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.359917841 podStartE2EDuration="23.359917841s" podCreationTimestamp="2026-01-27 18:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:10:26.355997545 +0000 UTC m=+281.485280177" watchObservedRunningTime="2026-01-27 18:10:26.359917841 +0000 UTC m=+281.489200463"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.361535 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.449295 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-template-error\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.449382 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.449481 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-audit-policies\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.449533 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-audit-dir\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.449629 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-router-certs\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.449699 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.449747 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfxtc\" (UniqueName: \"kubernetes.io/projected/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-kube-api-access-dfxtc\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.450033 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.450150 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.450236 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-service-ca\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.450290 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.450327 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-session\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.450379 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-template-login\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.450514 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.527385 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551415 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551511 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551547 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-service-ca\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551607 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551629 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-session\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551652 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-template-login\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551694 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551730 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-template-error\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551755 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551816 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-audit-policies\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551849 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-audit-dir\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551875 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-router-certs\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.551943 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.552017 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfxtc\" (UniqueName: \"kubernetes.io/projected/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-kube-api-access-dfxtc\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.552602 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-audit-dir\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.553778 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-audit-policies\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.554230 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-service-ca\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.555033 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.555692 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.562750 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.562806 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-template-login\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.564320 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-template-error\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.565066 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.566732 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.567176 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-session\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.567472 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-router-certs\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.569130 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.575351 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.580437 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfxtc\" (UniqueName: \"kubernetes.io/projected/b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f-kube-api-access-dfxtc\") pod \"oauth-openshift-788784fd4b-j7f9b\" (UID: \"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f\") " pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b"
Jan 27 18:10:26
crc kubenswrapper[4907]: I0127 18:10:26.598796 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.639642 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.676231 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.696809 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.829225 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.894139 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 18:10:26 crc kubenswrapper[4907]: I0127 18:10:26.982215 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.027637 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.037356 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.062133 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.137080 4907 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.210226 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.321430 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.390714 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.400491 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.418983 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.483413 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-788784fd4b-j7f9b"] Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.487226 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.505203 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.635974 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.655099 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 
18:10:27.757349 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.764449 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df82c5b4-85d6-4b74-85f5-46d598058d2d" path="/var/lib/kubelet/pods/df82c5b4-85d6-4b74-85f5-46d598058d2d/volumes" Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.788022 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.804189 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.871079 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.876268 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.930142 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-788784fd4b-j7f9b"] Jan 27 18:10:27 crc kubenswrapper[4907]: W0127 18:10:27.938196 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8a8fcf5_2457_47d4_9f00_6aad27a2cc1f.slice/crio-5a3de8a221b93fcb364733e60213572f5561d2a3993d4cb81c4a85c921baddac WatchSource:0}: Error finding container 5a3de8a221b93fcb364733e60213572f5561d2a3993d4cb81c4a85c921baddac: Status 404 returned error can't find the container with id 5a3de8a221b93fcb364733e60213572f5561d2a3993d4cb81c4a85c921baddac Jan 27 18:10:27 crc kubenswrapper[4907]: I0127 18:10:27.973105 4907 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.197089 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.200938 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.293994 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.299305 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.357191 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.412737 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.616226 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.714493 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.720813 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" event={"ID":"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f","Type":"ContainerStarted","Data":"584633d93075f3cab246f00b53f57b6d6dbc4bb552695d874bc24adb82e896e9"} Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.720880 4907 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" event={"ID":"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f","Type":"ContainerStarted","Data":"5a3de8a221b93fcb364733e60213572f5561d2a3993d4cb81c4a85c921baddac"} Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.721208 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.728744 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.748444 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" podStartSLOduration=52.748423789 podStartE2EDuration="52.748423789s" podCreationTimestamp="2026-01-27 18:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:10:28.744461852 +0000 UTC m=+283.873744464" watchObservedRunningTime="2026-01-27 18:10:28.748423789 +0000 UTC m=+283.877706401" Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.758279 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 18:10:28 crc kubenswrapper[4907]: I0127 18:10:28.848769 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.038685 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.041894 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 
18:10:29.242778 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.251257 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.290835 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.413015 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.495796 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.516542 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.658034 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.910095 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.949439 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 18:10:29 crc kubenswrapper[4907]: I0127 18:10:29.989426 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 18:10:30 crc kubenswrapper[4907]: I0127 18:10:30.263913 4907 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 18:10:30 crc kubenswrapper[4907]: I0127 18:10:30.442946 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 18:10:30 crc kubenswrapper[4907]: I0127 18:10:30.567150 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 18:10:37 crc kubenswrapper[4907]: I0127 18:10:37.421861 4907 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 18:10:37 crc kubenswrapper[4907]: I0127 18:10:37.422660 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://53468f14c49ef8880b385ebe2e20251fed1e504dc30d1d9a335aca847959a232" gracePeriod=5 Jan 27 18:10:42 crc kubenswrapper[4907]: I0127 18:10:42.822757 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 18:10:42 crc kubenswrapper[4907]: I0127 18:10:42.823534 4907 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="53468f14c49ef8880b385ebe2e20251fed1e504dc30d1d9a335aca847959a232" exitCode=137 Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.026753 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.027238 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.096739 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.096845 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.096925 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.096955 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.096963 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.096982 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.097025 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.097069 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.097282 4907 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.097307 4907 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.097323 4907 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.097366 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.112085 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.197941 4907 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.197975 4907 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.760129 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.760430 4907 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.778924 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.778978 4907 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="3bda6022-73ad-4f22-80f9-94f18a1c9d59" Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.785958 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.786012 4907 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="3bda6022-73ad-4f22-80f9-94f18a1c9d59" Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.838008 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.838124 4907 scope.go:117] "RemoveContainer" containerID="53468f14c49ef8880b385ebe2e20251fed1e504dc30d1d9a335aca847959a232" Jan 27 18:10:43 crc kubenswrapper[4907]: I0127 18:10:43.838272 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:10:44 crc kubenswrapper[4907]: I0127 18:10:44.850663 4907 generic.go:334] "Generic (PLEG): container finished" podID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerID="a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad" exitCode=0 Jan 27 18:10:44 crc kubenswrapper[4907]: I0127 18:10:44.850780 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" event={"ID":"1fb72397-1fbe-4f9d-976a-19ca15b2da2c","Type":"ContainerDied","Data":"a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad"} Jan 27 18:10:44 crc kubenswrapper[4907]: I0127 18:10:44.852207 4907 scope.go:117] "RemoveContainer" containerID="a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad" Jan 27 18:10:45 crc kubenswrapper[4907]: I0127 18:10:45.490501 4907 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 27 18:10:45 crc kubenswrapper[4907]: I0127 18:10:45.861466 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" event={"ID":"1fb72397-1fbe-4f9d-976a-19ca15b2da2c","Type":"ContainerStarted","Data":"3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163"} Jan 27 18:10:45 crc kubenswrapper[4907]: I0127 18:10:45.862475 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:10:45 crc kubenswrapper[4907]: I0127 18:10:45.865535 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:10:56 crc kubenswrapper[4907]: I0127 18:10:56.938475 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 27 18:10:56 crc kubenswrapper[4907]: I0127 18:10:56.942201 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 18:10:56 crc kubenswrapper[4907]: I0127 18:10:56.942269 4907 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e6c150e0fe85afdfda2661378a17099174c098af26dca09f65158be57ca5572f" exitCode=137 Jan 27 18:10:56 crc kubenswrapper[4907]: I0127 18:10:56.942315 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e6c150e0fe85afdfda2661378a17099174c098af26dca09f65158be57ca5572f"} Jan 27 18:10:56 crc kubenswrapper[4907]: I0127 18:10:56.942371 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8beceedec1766937d7e161638b23c21838a345d08e38ed80357dfb2e4490308a"} Jan 27 18:10:56 crc kubenswrapper[4907]: I0127 18:10:56.942400 4907 scope.go:117] "RemoveContainer" containerID="56f5f5cdfe627a17c529226d438ec710735031f0107284a5054f4c81f12b2909" Jan 27 18:10:57 crc kubenswrapper[4907]: I0127 18:10:57.949784 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 27 18:11:00 crc kubenswrapper[4907]: I0127 18:11:00.777935 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:11:02 crc kubenswrapper[4907]: I0127 18:11:02.882034 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-glcgf"] Jan 27 18:11:02 crc kubenswrapper[4907]: I0127 18:11:02.882708 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-glcgf" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" containerName="registry-server" containerID="cri-o://48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303" gracePeriod=2 Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.312818 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.496903 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-utilities\") pod \"ed699310-2f9f-414f-ad04-7778af36ddb7\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") " Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.496997 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wsq6\" (UniqueName: \"kubernetes.io/projected/ed699310-2f9f-414f-ad04-7778af36ddb7-kube-api-access-8wsq6\") pod \"ed699310-2f9f-414f-ad04-7778af36ddb7\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") " Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.497117 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-catalog-content\") pod \"ed699310-2f9f-414f-ad04-7778af36ddb7\" (UID: \"ed699310-2f9f-414f-ad04-7778af36ddb7\") " Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.498539 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-utilities" (OuterVolumeSpecName: "utilities") pod "ed699310-2f9f-414f-ad04-7778af36ddb7" (UID: "ed699310-2f9f-414f-ad04-7778af36ddb7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.506166 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed699310-2f9f-414f-ad04-7778af36ddb7-kube-api-access-8wsq6" (OuterVolumeSpecName: "kube-api-access-8wsq6") pod "ed699310-2f9f-414f-ad04-7778af36ddb7" (UID: "ed699310-2f9f-414f-ad04-7778af36ddb7"). InnerVolumeSpecName "kube-api-access-8wsq6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.522304 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed699310-2f9f-414f-ad04-7778af36ddb7" (UID: "ed699310-2f9f-414f-ad04-7778af36ddb7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.598917 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.598970 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wsq6\" (UniqueName: \"kubernetes.io/projected/ed699310-2f9f-414f-ad04-7778af36ddb7-kube-api-access-8wsq6\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.598986 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed699310-2f9f-414f-ad04-7778af36ddb7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.990895 4907 generic.go:334] "Generic (PLEG): container finished" podID="ed699310-2f9f-414f-ad04-7778af36ddb7" containerID="48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303" exitCode=0 Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.990963 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcgf" event={"ID":"ed699310-2f9f-414f-ad04-7778af36ddb7","Type":"ContainerDied","Data":"48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303"} Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.991003 4907 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-glcgf" event={"ID":"ed699310-2f9f-414f-ad04-7778af36ddb7","Type":"ContainerDied","Data":"6e4bfa4b2124a87f7a84ae1d0c9f804ceb0bde1aa683068bc3c70fac1b397adf"} Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.991027 4907 scope.go:117] "RemoveContainer" containerID="48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303" Jan 27 18:11:03 crc kubenswrapper[4907]: I0127 18:11:03.991060 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glcgf" Jan 27 18:11:04 crc kubenswrapper[4907]: I0127 18:11:04.020020 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-glcgf"] Jan 27 18:11:04 crc kubenswrapper[4907]: I0127 18:11:04.021718 4907 scope.go:117] "RemoveContainer" containerID="8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185" Jan 27 18:11:04 crc kubenswrapper[4907]: I0127 18:11:04.027331 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-glcgf"] Jan 27 18:11:04 crc kubenswrapper[4907]: I0127 18:11:04.043946 4907 scope.go:117] "RemoveContainer" containerID="41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13" Jan 27 18:11:04 crc kubenswrapper[4907]: I0127 18:11:04.069931 4907 scope.go:117] "RemoveContainer" containerID="48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303" Jan 27 18:11:04 crc kubenswrapper[4907]: E0127 18:11:04.070661 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303\": container with ID starting with 48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303 not found: ID does not exist" containerID="48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303" Jan 27 18:11:04 crc kubenswrapper[4907]: I0127 18:11:04.070782 4907 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303"} err="failed to get container status \"48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303\": rpc error: code = NotFound desc = could not find container \"48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303\": container with ID starting with 48866e50fdf4301ce7cd19c746677672db8d7e58e306b53f20c56808422f6303 not found: ID does not exist" Jan 27 18:11:04 crc kubenswrapper[4907]: I0127 18:11:04.070833 4907 scope.go:117] "RemoveContainer" containerID="8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185" Jan 27 18:11:04 crc kubenswrapper[4907]: E0127 18:11:04.071288 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185\": container with ID starting with 8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185 not found: ID does not exist" containerID="8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185" Jan 27 18:11:04 crc kubenswrapper[4907]: I0127 18:11:04.071331 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185"} err="failed to get container status \"8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185\": rpc error: code = NotFound desc = could not find container \"8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185\": container with ID starting with 8c92f3dce2ef061d24149c9e2f8f6d82a5c9033ba2df41a59f21641a7cedf185 not found: ID does not exist" Jan 27 18:11:04 crc kubenswrapper[4907]: I0127 18:11:04.071359 4907 scope.go:117] "RemoveContainer" containerID="41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13" Jan 27 18:11:04 crc kubenswrapper[4907]: E0127 
18:11:04.071817 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13\": container with ID starting with 41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13 not found: ID does not exist" containerID="41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13" Jan 27 18:11:04 crc kubenswrapper[4907]: I0127 18:11:04.071856 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13"} err="failed to get container status \"41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13\": rpc error: code = NotFound desc = could not find container \"41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13\": container with ID starting with 41947a6f48bdfd2cc180e424e3f3f6791332b66e9aa8f3b8d5100724a4a9ec13 not found: ID does not exist" Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.085820 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b7l4d"] Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.087691 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b7l4d" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" containerName="registry-server" containerID="cri-o://99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392" gracePeriod=2 Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.498219 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.630681 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dz68\" (UniqueName: \"kubernetes.io/projected/f317b8ef-4875-4f24-8926-8efd5826a51e-kube-api-access-9dz68\") pod \"f317b8ef-4875-4f24-8926-8efd5826a51e\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.630772 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-catalog-content\") pod \"f317b8ef-4875-4f24-8926-8efd5826a51e\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.630838 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-utilities\") pod \"f317b8ef-4875-4f24-8926-8efd5826a51e\" (UID: \"f317b8ef-4875-4f24-8926-8efd5826a51e\") " Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.632144 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-utilities" (OuterVolumeSpecName: "utilities") pod "f317b8ef-4875-4f24-8926-8efd5826a51e" (UID: "f317b8ef-4875-4f24-8926-8efd5826a51e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.640888 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f317b8ef-4875-4f24-8926-8efd5826a51e-kube-api-access-9dz68" (OuterVolumeSpecName: "kube-api-access-9dz68") pod "f317b8ef-4875-4f24-8926-8efd5826a51e" (UID: "f317b8ef-4875-4f24-8926-8efd5826a51e"). InnerVolumeSpecName "kube-api-access-9dz68". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.680413 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f317b8ef-4875-4f24-8926-8efd5826a51e" (UID: "f317b8ef-4875-4f24-8926-8efd5826a51e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.732195 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dz68\" (UniqueName: \"kubernetes.io/projected/f317b8ef-4875-4f24-8926-8efd5826a51e-kube-api-access-9dz68\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.732267 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.732296 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f317b8ef-4875-4f24-8926-8efd5826a51e-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:05 crc kubenswrapper[4907]: I0127 18:11:05.757524 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" path="/var/lib/kubelet/pods/ed699310-2f9f-414f-ad04-7778af36ddb7/volumes" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.007049 4907 generic.go:334] "Generic (PLEG): container finished" podID="f317b8ef-4875-4f24-8926-8efd5826a51e" containerID="99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392" exitCode=0 Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.007107 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b7l4d" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.007126 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7l4d" event={"ID":"f317b8ef-4875-4f24-8926-8efd5826a51e","Type":"ContainerDied","Data":"99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392"} Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.007609 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7l4d" event={"ID":"f317b8ef-4875-4f24-8926-8efd5826a51e","Type":"ContainerDied","Data":"381af3184b48628759e0e418b748e32d55bc4e48955c79f0bca42f10d1b84973"} Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.007631 4907 scope.go:117] "RemoveContainer" containerID="99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.024597 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b7l4d"] Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.027081 4907 scope.go:117] "RemoveContainer" containerID="f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.028154 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b7l4d"] Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.052408 4907 scope.go:117] "RemoveContainer" containerID="98b9129cbdf3200f9debd8d0083bd69eccf6a4e15f4ded82649a33a7b408262d" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.065775 4907 scope.go:117] "RemoveContainer" containerID="99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392" Jan 27 18:11:06 crc kubenswrapper[4907]: E0127 18:11:06.066101 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392\": container with ID starting with 99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392 not found: ID does not exist" containerID="99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.066153 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392"} err="failed to get container status \"99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392\": rpc error: code = NotFound desc = could not find container \"99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392\": container with ID starting with 99d6789a7d5ac9b8c4f72255ba603b54b41a3ccd7daf428fbeeff3e7b5114392 not found: ID does not exist" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.066186 4907 scope.go:117] "RemoveContainer" containerID="f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685" Jan 27 18:11:06 crc kubenswrapper[4907]: E0127 18:11:06.066479 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685\": container with ID starting with f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685 not found: ID does not exist" containerID="f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.066511 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685"} err="failed to get container status \"f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685\": rpc error: code = NotFound desc = could not find container \"f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685\": container with ID 
starting with f4351a8e44a6977e2561f1e9ac6aa549170b3c8fae94ee9c877e1410a181d685 not found: ID does not exist" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.066535 4907 scope.go:117] "RemoveContainer" containerID="98b9129cbdf3200f9debd8d0083bd69eccf6a4e15f4ded82649a33a7b408262d" Jan 27 18:11:06 crc kubenswrapper[4907]: E0127 18:11:06.066830 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98b9129cbdf3200f9debd8d0083bd69eccf6a4e15f4ded82649a33a7b408262d\": container with ID starting with 98b9129cbdf3200f9debd8d0083bd69eccf6a4e15f4ded82649a33a7b408262d not found: ID does not exist" containerID="98b9129cbdf3200f9debd8d0083bd69eccf6a4e15f4ded82649a33a7b408262d" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.066853 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98b9129cbdf3200f9debd8d0083bd69eccf6a4e15f4ded82649a33a7b408262d"} err="failed to get container status \"98b9129cbdf3200f9debd8d0083bd69eccf6a4e15f4ded82649a33a7b408262d\": rpc error: code = NotFound desc = could not find container \"98b9129cbdf3200f9debd8d0083bd69eccf6a4e15f4ded82649a33a7b408262d\": container with ID starting with 98b9129cbdf3200f9debd8d0083bd69eccf6a4e15f4ded82649a33a7b408262d not found: ID does not exist" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.267028 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:11:06 crc kubenswrapper[4907]: I0127 18:11:06.271039 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:11:07 crc kubenswrapper[4907]: I0127 18:11:07.018248 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:11:07 crc 
kubenswrapper[4907]: I0127 18:11:07.760000 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" path="/var/lib/kubelet/pods/f317b8ef-4875-4f24-8926-8efd5826a51e/volumes" Jan 27 18:11:16 crc kubenswrapper[4907]: I0127 18:11:16.856362 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6656ff6484-mr4x9"] Jan 27 18:11:16 crc kubenswrapper[4907]: I0127 18:11:16.857364 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" podUID="68c8acc2-637c-4006-848e-bed0c1ea77fc" containerName="controller-manager" containerID="cri-o://2942b6d9edb2e80cf23dc546bc0f39c3e93845fd655de437812a42a8ae231f06" gracePeriod=30 Jan 27 18:11:16 crc kubenswrapper[4907]: I0127 18:11:16.863301 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk"] Jan 27 18:11:16 crc kubenswrapper[4907]: I0127 18:11:16.863524 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" podUID="09e10c2d-9dea-4d6c-9d36-feb0fdd0df13" containerName="route-controller-manager" containerID="cri-o://d05e8f413d12c557899d680b62b203c67b646770dae02b4ad98bf6608a23a5de" gracePeriod=30 Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.115636 4907 generic.go:334] "Generic (PLEG): container finished" podID="68c8acc2-637c-4006-848e-bed0c1ea77fc" containerID="2942b6d9edb2e80cf23dc546bc0f39c3e93845fd655de437812a42a8ae231f06" exitCode=0 Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.117893 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" 
event={"ID":"68c8acc2-637c-4006-848e-bed0c1ea77fc","Type":"ContainerDied","Data":"2942b6d9edb2e80cf23dc546bc0f39c3e93845fd655de437812a42a8ae231f06"} Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.149218 4907 generic.go:334] "Generic (PLEG): container finished" podID="09e10c2d-9dea-4d6c-9d36-feb0fdd0df13" containerID="d05e8f413d12c557899d680b62b203c67b646770dae02b4ad98bf6608a23a5de" exitCode=0 Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.149279 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" event={"ID":"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13","Type":"ContainerDied","Data":"d05e8f413d12c557899d680b62b203c67b646770dae02b4ad98bf6608a23a5de"} Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.277070 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.387717 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.404230 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68c8acc2-637c-4006-848e-bed0c1ea77fc-serving-cert\") pod \"68c8acc2-637c-4006-848e-bed0c1ea77fc\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.404304 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-client-ca\") pod \"68c8acc2-637c-4006-848e-bed0c1ea77fc\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.404354 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-proxy-ca-bundles\") pod \"68c8acc2-637c-4006-848e-bed0c1ea77fc\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.404385 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-config\") pod \"68c8acc2-637c-4006-848e-bed0c1ea77fc\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.404455 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r5tb\" (UniqueName: \"kubernetes.io/projected/68c8acc2-637c-4006-848e-bed0c1ea77fc-kube-api-access-9r5tb\") pod \"68c8acc2-637c-4006-848e-bed0c1ea77fc\" (UID: \"68c8acc2-637c-4006-848e-bed0c1ea77fc\") " Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.405297 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-client-ca" (OuterVolumeSpecName: "client-ca") pod "68c8acc2-637c-4006-848e-bed0c1ea77fc" (UID: "68c8acc2-637c-4006-848e-bed0c1ea77fc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.405424 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-config" (OuterVolumeSpecName: "config") pod "68c8acc2-637c-4006-848e-bed0c1ea77fc" (UID: "68c8acc2-637c-4006-848e-bed0c1ea77fc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.405514 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "68c8acc2-637c-4006-848e-bed0c1ea77fc" (UID: "68c8acc2-637c-4006-848e-bed0c1ea77fc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.412007 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68c8acc2-637c-4006-848e-bed0c1ea77fc-kube-api-access-9r5tb" (OuterVolumeSpecName: "kube-api-access-9r5tb") pod "68c8acc2-637c-4006-848e-bed0c1ea77fc" (UID: "68c8acc2-637c-4006-848e-bed0c1ea77fc"). InnerVolumeSpecName "kube-api-access-9r5tb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.416204 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68c8acc2-637c-4006-848e-bed0c1ea77fc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "68c8acc2-637c-4006-848e-bed0c1ea77fc" (UID: "68c8acc2-637c-4006-848e-bed0c1ea77fc"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.506047 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-config\") pod \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.506118 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-client-ca\") pod \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.506150 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-serving-cert\") pod \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.506222 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlgrb\" (UniqueName: \"kubernetes.io/projected/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-kube-api-access-qlgrb\") pod \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\" (UID: \"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13\") " Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.506426 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68c8acc2-637c-4006-848e-bed0c1ea77fc-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.506442 4907 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:17 crc 
kubenswrapper[4907]: I0127 18:11:17.506453 4907 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.506465 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c8acc2-637c-4006-848e-bed0c1ea77fc-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.506476 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r5tb\" (UniqueName: \"kubernetes.io/projected/68c8acc2-637c-4006-848e-bed0c1ea77fc-kube-api-access-9r5tb\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.507354 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-client-ca" (OuterVolumeSpecName: "client-ca") pod "09e10c2d-9dea-4d6c-9d36-feb0fdd0df13" (UID: "09e10c2d-9dea-4d6c-9d36-feb0fdd0df13"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.507520 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-config" (OuterVolumeSpecName: "config") pod "09e10c2d-9dea-4d6c-9d36-feb0fdd0df13" (UID: "09e10c2d-9dea-4d6c-9d36-feb0fdd0df13"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.510935 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-kube-api-access-qlgrb" (OuterVolumeSpecName: "kube-api-access-qlgrb") pod "09e10c2d-9dea-4d6c-9d36-feb0fdd0df13" (UID: "09e10c2d-9dea-4d6c-9d36-feb0fdd0df13"). 
InnerVolumeSpecName "kube-api-access-qlgrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.510975 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09e10c2d-9dea-4d6c-9d36-feb0fdd0df13" (UID: "09e10c2d-9dea-4d6c-9d36-feb0fdd0df13"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.607847 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlgrb\" (UniqueName: \"kubernetes.io/projected/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-kube-api-access-qlgrb\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.607894 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.607908 4907 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:17 crc kubenswrapper[4907]: I0127 18:11:17.607922 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.156485 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" event={"ID":"09e10c2d-9dea-4d6c-9d36-feb0fdd0df13","Type":"ContainerDied","Data":"0a82a281ef0946127a91cd85165ac47e78c9805c96825d5e1f93d6916e167d0f"} Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.156564 
4907 scope.go:117] "RemoveContainer" containerID="d05e8f413d12c557899d680b62b203c67b646770dae02b4ad98bf6608a23a5de" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.158246 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" event={"ID":"68c8acc2-637c-4006-848e-bed0c1ea77fc","Type":"ContainerDied","Data":"e14ea8d0f79988b247be75c0c550ba68530c65e3005c05e315cd0f64e6973a7d"} Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.158301 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6656ff6484-mr4x9" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.158546 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.179630 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6656ff6484-mr4x9"] Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.180051 4907 scope.go:117] "RemoveContainer" containerID="2942b6d9edb2e80cf23dc546bc0f39c3e93845fd655de437812a42a8ae231f06" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.182765 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6656ff6484-mr4x9"] Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.192905 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk"] Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.197304 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb9b55fc9-6sdxk"] Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.777385 4907 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"] Jan 27 18:11:18 crc kubenswrapper[4907]: E0127 18:11:18.778134 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" containerName="registry-server" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778157 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" containerName="registry-server" Jan 27 18:11:18 crc kubenswrapper[4907]: E0127 18:11:18.778174 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" containerName="extract-content" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778184 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" containerName="extract-content" Jan 27 18:11:18 crc kubenswrapper[4907]: E0127 18:11:18.778202 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" containerName="extract-utilities" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778211 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" containerName="extract-utilities" Jan 27 18:11:18 crc kubenswrapper[4907]: E0127 18:11:18.778225 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c8acc2-637c-4006-848e-bed0c1ea77fc" containerName="controller-manager" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778233 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c8acc2-637c-4006-848e-bed0c1ea77fc" containerName="controller-manager" Jan 27 18:11:18 crc kubenswrapper[4907]: E0127 18:11:18.778248 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" containerName="extract-content" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778256 4907 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" containerName="extract-content" Jan 27 18:11:18 crc kubenswrapper[4907]: E0127 18:11:18.778270 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09e10c2d-9dea-4d6c-9d36-feb0fdd0df13" containerName="route-controller-manager" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778278 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="09e10c2d-9dea-4d6c-9d36-feb0fdd0df13" containerName="route-controller-manager" Jan 27 18:11:18 crc kubenswrapper[4907]: E0127 18:11:18.778288 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" containerName="extract-utilities" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778296 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" containerName="extract-utilities" Jan 27 18:11:18 crc kubenswrapper[4907]: E0127 18:11:18.778308 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" containerName="registry-server" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778316 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" containerName="registry-server" Jan 27 18:11:18 crc kubenswrapper[4907]: E0127 18:11:18.778329 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778337 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778451 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f317b8ef-4875-4f24-8926-8efd5826a51e" containerName="registry-server" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778469 4907 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="09e10c2d-9dea-4d6c-9d36-feb0fdd0df13" containerName="route-controller-manager" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778502 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="68c8acc2-637c-4006-848e-bed0c1ea77fc" containerName="controller-manager" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778521 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.778532 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed699310-2f9f-414f-ad04-7778af36ddb7" containerName="registry-server" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.779017 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.783201 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.784567 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.784832 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.784848 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.785105 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.785366 4907 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.786893 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl"] Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.787666 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.790348 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.792344 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.792397 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.792579 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.792701 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.792473 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.795226 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.799743 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl"] Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.833673 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"] Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.928244 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-client-ca\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.928320 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmwcx\" (UniqueName: \"kubernetes.io/projected/b7c08430-9a0d-4699-8521-3ee5c774ceab-kube-api-access-xmwcx\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.928402 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b0a63e6-0f9c-42b7-8006-fbd93909482e-config\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.928430 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b0a63e6-0f9c-42b7-8006-fbd93909482e-serving-cert\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " 
pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.928500 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-config\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.928541 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b0a63e6-0f9c-42b7-8006-fbd93909482e-client-ca\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.929137 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7c08430-9a0d-4699-8521-3ee5c774ceab-serving-cert\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.929198 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-proxy-ca-bundles\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" Jan 27 18:11:18 crc kubenswrapper[4907]: I0127 18:11:18.929344 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rrb2t\" (UniqueName: \"kubernetes.io/projected/4b0a63e6-0f9c-42b7-8006-fbd93909482e-kube-api-access-rrb2t\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.030551 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrb2t\" (UniqueName: \"kubernetes.io/projected/4b0a63e6-0f9c-42b7-8006-fbd93909482e-kube-api-access-rrb2t\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.030986 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b0a63e6-0f9c-42b7-8006-fbd93909482e-config\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.031014 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-client-ca\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.032243 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmwcx\" (UniqueName: \"kubernetes.io/projected/b7c08430-9a0d-4699-8521-3ee5c774ceab-kube-api-access-xmwcx\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " 
pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.032263 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b0a63e6-0f9c-42b7-8006-fbd93909482e-serving-cert\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.032130 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-client-ca\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.032180 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b0a63e6-0f9c-42b7-8006-fbd93909482e-config\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.032626 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-config\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.032669 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b0a63e6-0f9c-42b7-8006-fbd93909482e-client-ca\") pod 
\"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.033477 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b0a63e6-0f9c-42b7-8006-fbd93909482e-client-ca\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.033621 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7c08430-9a0d-4699-8521-3ee5c774ceab-serving-cert\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.033648 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-config\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.033658 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-proxy-ca-bundles\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.034755 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-proxy-ca-bundles\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.039139 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7c08430-9a0d-4699-8521-3ee5c774ceab-serving-cert\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.044395 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b0a63e6-0f9c-42b7-8006-fbd93909482e-serving-cert\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.047064 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmwcx\" (UniqueName: \"kubernetes.io/projected/b7c08430-9a0d-4699-8521-3ee5c774ceab-kube-api-access-xmwcx\") pod \"controller-manager-7c45df54bf-7mdzc\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.048754 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrb2t\" (UniqueName: \"kubernetes.io/projected/4b0a63e6-0f9c-42b7-8006-fbd93909482e-kube-api-access-rrb2t\") pod \"route-controller-manager-8c88b6f67-gq6zl\" (UID: \"4b0a63e6-0f9c-42b7-8006-fbd93909482e\") " pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" Jan 27 18:11:19 crc 
kubenswrapper[4907]: I0127 18:11:19.102947 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.111173 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.323128 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl"] Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.435949 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"] Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.756734 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09e10c2d-9dea-4d6c-9d36-feb0fdd0df13" path="/var/lib/kubelet/pods/09e10c2d-9dea-4d6c-9d36-feb0fdd0df13/volumes" Jan 27 18:11:19 crc kubenswrapper[4907]: I0127 18:11:19.758083 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68c8acc2-637c-4006-848e-bed0c1ea77fc" path="/var/lib/kubelet/pods/68c8acc2-637c-4006-848e-bed0c1ea77fc/volumes" Jan 27 18:11:20 crc kubenswrapper[4907]: I0127 18:11:20.175234 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" event={"ID":"4b0a63e6-0f9c-42b7-8006-fbd93909482e","Type":"ContainerStarted","Data":"5bfb8f59620c18f94e4f2d606be33ed7a3f153e53b15c3e01d3af035f4226fda"} Jan 27 18:11:20 crc kubenswrapper[4907]: I0127 18:11:20.175295 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" 
event={"ID":"4b0a63e6-0f9c-42b7-8006-fbd93909482e","Type":"ContainerStarted","Data":"235dbd8cb9b8deef26f39c83aed756e442aa6ab8bf9d6a9b5d7614669590af61"} Jan 27 18:11:20 crc kubenswrapper[4907]: I0127 18:11:20.175448 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" Jan 27 18:11:20 crc kubenswrapper[4907]: I0127 18:11:20.176983 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" event={"ID":"b7c08430-9a0d-4699-8521-3ee5c774ceab","Type":"ContainerStarted","Data":"37e13818fa5cf2af9c04699ae7b7c069e1a8cbb912cd1173876cf5bdc881089d"} Jan 27 18:11:20 crc kubenswrapper[4907]: I0127 18:11:20.177030 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" event={"ID":"b7c08430-9a0d-4699-8521-3ee5c774ceab","Type":"ContainerStarted","Data":"18f6e5e3b1ee0f3ca11561596a6ba9391fff07439b9fa26c916078f6a5a21c7a"} Jan 27 18:11:20 crc kubenswrapper[4907]: I0127 18:11:20.177241 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" Jan 27 18:11:20 crc kubenswrapper[4907]: I0127 18:11:20.180048 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" Jan 27 18:11:20 crc kubenswrapper[4907]: I0127 18:11:20.180861 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" Jan 27 18:11:20 crc kubenswrapper[4907]: I0127 18:11:20.198325 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" podStartSLOduration=4.19830993 podStartE2EDuration="4.19830993s" podCreationTimestamp="2026-01-27 
18:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:11:20.194285861 +0000 UTC m=+335.323568473" watchObservedRunningTime="2026-01-27 18:11:20.19830993 +0000 UTC m=+335.327592542" Jan 27 18:11:20 crc kubenswrapper[4907]: I0127 18:11:20.247025 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" podStartSLOduration=4.247005525 podStartE2EDuration="4.247005525s" podCreationTimestamp="2026-01-27 18:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:11:20.242108861 +0000 UTC m=+335.371391473" watchObservedRunningTime="2026-01-27 18:11:20.247005525 +0000 UTC m=+335.376288137" Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.016332 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"] Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.017360 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" podUID="b7c08430-9a0d-4699-8521-3ee5c774ceab" containerName="controller-manager" containerID="cri-o://37e13818fa5cf2af9c04699ae7b7c069e1a8cbb912cd1173876cf5bdc881089d" gracePeriod=30 Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.379722 4907 generic.go:334] "Generic (PLEG): container finished" podID="b7c08430-9a0d-4699-8521-3ee5c774ceab" containerID="37e13818fa5cf2af9c04699ae7b7c069e1a8cbb912cd1173876cf5bdc881089d" exitCode=0 Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.379799 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" 
event={"ID":"b7c08430-9a0d-4699-8521-3ee5c774ceab","Type":"ContainerDied","Data":"37e13818fa5cf2af9c04699ae7b7c069e1a8cbb912cd1173876cf5bdc881089d"} Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.459993 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.544136 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmwcx\" (UniqueName: \"kubernetes.io/projected/b7c08430-9a0d-4699-8521-3ee5c774ceab-kube-api-access-xmwcx\") pod \"b7c08430-9a0d-4699-8521-3ee5c774ceab\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.544211 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-config\") pod \"b7c08430-9a0d-4699-8521-3ee5c774ceab\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.544250 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-proxy-ca-bundles\") pod \"b7c08430-9a0d-4699-8521-3ee5c774ceab\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.544304 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7c08430-9a0d-4699-8521-3ee5c774ceab-serving-cert\") pod \"b7c08430-9a0d-4699-8521-3ee5c774ceab\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.544343 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-client-ca\") pod \"b7c08430-9a0d-4699-8521-3ee5c774ceab\" (UID: \"b7c08430-9a0d-4699-8521-3ee5c774ceab\") " Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.545184 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b7c08430-9a0d-4699-8521-3ee5c774ceab" (UID: "b7c08430-9a0d-4699-8521-3ee5c774ceab"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.545369 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-config" (OuterVolumeSpecName: "config") pod "b7c08430-9a0d-4699-8521-3ee5c774ceab" (UID: "b7c08430-9a0d-4699-8521-3ee5c774ceab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.545568 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-client-ca" (OuterVolumeSpecName: "client-ca") pod "b7c08430-9a0d-4699-8521-3ee5c774ceab" (UID: "b7c08430-9a0d-4699-8521-3ee5c774ceab"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.545810 4907 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.545829 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.545843 4907 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7c08430-9a0d-4699-8521-3ee5c774ceab-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.550136 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c08430-9a0d-4699-8521-3ee5c774ceab-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b7c08430-9a0d-4699-8521-3ee5c774ceab" (UID: "b7c08430-9a0d-4699-8521-3ee5c774ceab"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.550250 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7c08430-9a0d-4699-8521-3ee5c774ceab-kube-api-access-xmwcx" (OuterVolumeSpecName: "kube-api-access-xmwcx") pod "b7c08430-9a0d-4699-8521-3ee5c774ceab" (UID: "b7c08430-9a0d-4699-8521-3ee5c774ceab"). InnerVolumeSpecName "kube-api-access-xmwcx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.646832 4907 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7c08430-9a0d-4699-8521-3ee5c774ceab-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.646882 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmwcx\" (UniqueName: \"kubernetes.io/projected/b7c08430-9a0d-4699-8521-3ee5c774ceab-kube-api-access-xmwcx\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.921824 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cg67x"] Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.922400 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cg67x" podUID="7ee8faea-87ec-4620-b6a8-db398d35039a" containerName="registry-server" containerID="cri-o://cca800a132dbc0637ef9f8a151d48baa2ebe9b0c352f4e619bea71a73ed6edb4" gracePeriod=30 Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.932237 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mhc2f"] Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.932469 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mhc2f" podUID="7c7f1204-674f-4d4e-a695-28b2d0956b32" containerName="registry-server" containerID="cri-o://f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d" gracePeriod=30 Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.947923 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pn59x"] Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.959348 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-klwtz"] Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.959623 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-klwtz" podUID="dee6d631-48d1-4137-9736-c028fb27e655" containerName="registry-server" containerID="cri-o://ee0b36e78c4be660d4c081e70ceb4caf889b14b20ef5255003245d03dea37b84" gracePeriod=30 Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.975646 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-87z2b"] Jan 27 18:11:53 crc kubenswrapper[4907]: E0127 18:11:53.976014 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7c08430-9a0d-4699-8521-3ee5c774ceab" containerName="controller-manager" Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.976039 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7c08430-9a0d-4699-8521-3ee5c774ceab" containerName="controller-manager" Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.976194 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7c08430-9a0d-4699-8521-3ee5c774ceab" containerName="controller-manager" Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.976678 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.983498 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-87z2b"] Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.989748 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jhwph"] Jan 27 18:11:53 crc kubenswrapper[4907]: I0127 18:11:53.990055 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jhwph" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerName="registry-server" containerID="cri-o://c4e1f6d017c07ec4d982e1d85c078c0a9d796f21d3669902a20eff03a671e183" gracePeriod=30 Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.053736 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5564598e-ff23-4f9e-b3de-64e127e94da6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-87z2b\" (UID: \"5564598e-ff23-4f9e-b3de-64e127e94da6\") " pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.053799 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxnnb\" (UniqueName: \"kubernetes.io/projected/5564598e-ff23-4f9e-b3de-64e127e94da6-kube-api-access-hxnnb\") pod \"marketplace-operator-79b997595-87z2b\" (UID: \"5564598e-ff23-4f9e-b3de-64e127e94da6\") " pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.053930 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/5564598e-ff23-4f9e-b3de-64e127e94da6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-87z2b\" (UID: \"5564598e-ff23-4f9e-b3de-64e127e94da6\") " pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.155166 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5564598e-ff23-4f9e-b3de-64e127e94da6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-87z2b\" (UID: \"5564598e-ff23-4f9e-b3de-64e127e94da6\") " pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.155214 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5564598e-ff23-4f9e-b3de-64e127e94da6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-87z2b\" (UID: \"5564598e-ff23-4f9e-b3de-64e127e94da6\") " pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.155276 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxnnb\" (UniqueName: \"kubernetes.io/projected/5564598e-ff23-4f9e-b3de-64e127e94da6-kube-api-access-hxnnb\") pod \"marketplace-operator-79b997595-87z2b\" (UID: \"5564598e-ff23-4f9e-b3de-64e127e94da6\") " pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.156754 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5564598e-ff23-4f9e-b3de-64e127e94da6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-87z2b\" (UID: \"5564598e-ff23-4f9e-b3de-64e127e94da6\") " pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" Jan 27 18:11:54 crc 
kubenswrapper[4907]: I0127 18:11:54.163707 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5564598e-ff23-4f9e-b3de-64e127e94da6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-87z2b\" (UID: \"5564598e-ff23-4f9e-b3de-64e127e94da6\") " pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.172587 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxnnb\" (UniqueName: \"kubernetes.io/projected/5564598e-ff23-4f9e-b3de-64e127e94da6-kube-api-access-hxnnb\") pod \"marketplace-operator-79b997595-87z2b\" (UID: \"5564598e-ff23-4f9e-b3de-64e127e94da6\") " pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.302202 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.309998 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.358828 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-catalog-content\") pod \"7c7f1204-674f-4d4e-a695-28b2d0956b32\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.359201 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s97lg\" (UniqueName: \"kubernetes.io/projected/7c7f1204-674f-4d4e-a695-28b2d0956b32-kube-api-access-s97lg\") pod \"7c7f1204-674f-4d4e-a695-28b2d0956b32\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.359241 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-utilities\") pod \"7c7f1204-674f-4d4e-a695-28b2d0956b32\" (UID: \"7c7f1204-674f-4d4e-a695-28b2d0956b32\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.360216 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-utilities" (OuterVolumeSpecName: "utilities") pod "7c7f1204-674f-4d4e-a695-28b2d0956b32" (UID: "7c7f1204-674f-4d4e-a695-28b2d0956b32"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.364818 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c7f1204-674f-4d4e-a695-28b2d0956b32-kube-api-access-s97lg" (OuterVolumeSpecName: "kube-api-access-s97lg") pod "7c7f1204-674f-4d4e-a695-28b2d0956b32" (UID: "7c7f1204-674f-4d4e-a695-28b2d0956b32"). InnerVolumeSpecName "kube-api-access-s97lg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.386528 4907 generic.go:334] "Generic (PLEG): container finished" podID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerID="c4e1f6d017c07ec4d982e1d85c078c0a9d796f21d3669902a20eff03a671e183" exitCode=0 Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.386606 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhwph" event={"ID":"1f9526ea-3ca9-4727-aadd-3103419511d9","Type":"ContainerDied","Data":"c4e1f6d017c07ec4d982e1d85c078c0a9d796f21d3669902a20eff03a671e183"} Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.387863 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" event={"ID":"b7c08430-9a0d-4699-8521-3ee5c774ceab","Type":"ContainerDied","Data":"18f6e5e3b1ee0f3ca11561596a6ba9391fff07439b9fa26c916078f6a5a21c7a"} Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.387895 4907 scope.go:117] "RemoveContainer" containerID="37e13818fa5cf2af9c04699ae7b7c069e1a8cbb912cd1173876cf5bdc881089d" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.387998 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c45df54bf-7mdzc" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.397688 4907 generic.go:334] "Generic (PLEG): container finished" podID="7c7f1204-674f-4d4e-a695-28b2d0956b32" containerID="f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d" exitCode=0 Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.397761 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhc2f" event={"ID":"7c7f1204-674f-4d4e-a695-28b2d0956b32","Type":"ContainerDied","Data":"f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d"} Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.397797 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhc2f" event={"ID":"7c7f1204-674f-4d4e-a695-28b2d0956b32","Type":"ContainerDied","Data":"011be499d8b8d8d22772e72b71e952b3184b41de73c3cfac7cf3219b4b7d08b2"} Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.397939 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mhc2f" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.406385 4907 generic.go:334] "Generic (PLEG): container finished" podID="dee6d631-48d1-4137-9736-c028fb27e655" containerID="ee0b36e78c4be660d4c081e70ceb4caf889b14b20ef5255003245d03dea37b84" exitCode=0 Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.406474 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-klwtz" event={"ID":"dee6d631-48d1-4137-9736-c028fb27e655","Type":"ContainerDied","Data":"ee0b36e78c4be660d4c081e70ceb4caf889b14b20ef5255003245d03dea37b84"} Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.406510 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-klwtz" event={"ID":"dee6d631-48d1-4137-9736-c028fb27e655","Type":"ContainerDied","Data":"9cd1ed5f538840fcd6ee0931fbe7a3c96a075f1d06cb90170d9ab15e3188d5a9"} Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.406527 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cd1ed5f538840fcd6ee0931fbe7a3c96a075f1d06cb90170d9ab15e3188d5a9" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.408312 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.428932 4907 generic.go:334] "Generic (PLEG): container finished" podID="7ee8faea-87ec-4620-b6a8-db398d35039a" containerID="cca800a132dbc0637ef9f8a151d48baa2ebe9b0c352f4e619bea71a73ed6edb4" exitCode=0 Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.429109 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" podUID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerName="marketplace-operator" containerID="cri-o://3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163" gracePeriod=30 Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.429409 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"] Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.429433 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cg67x" event={"ID":"7ee8faea-87ec-4620-b6a8-db398d35039a","Type":"ContainerDied","Data":"cca800a132dbc0637ef9f8a151d48baa2ebe9b0c352f4e619bea71a73ed6edb4"} Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.443278 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c45df54bf-7mdzc"] Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.453962 4907 scope.go:117] "RemoveContainer" containerID="f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.460196 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frltg\" (UniqueName: \"kubernetes.io/projected/dee6d631-48d1-4137-9736-c028fb27e655-kube-api-access-frltg\") pod \"dee6d631-48d1-4137-9736-c028fb27e655\" (UID: \"dee6d631-48d1-4137-9736-c028fb27e655\") " Jan 27 18:11:54 crc 
kubenswrapper[4907]: I0127 18:11:54.460311 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-catalog-content\") pod \"dee6d631-48d1-4137-9736-c028fb27e655\" (UID: \"dee6d631-48d1-4137-9736-c028fb27e655\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.460391 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-utilities\") pod \"dee6d631-48d1-4137-9736-c028fb27e655\" (UID: \"dee6d631-48d1-4137-9736-c028fb27e655\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.460626 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s97lg\" (UniqueName: \"kubernetes.io/projected/7c7f1204-674f-4d4e-a695-28b2d0956b32-kube-api-access-s97lg\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.460637 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.466723 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dee6d631-48d1-4137-9736-c028fb27e655-kube-api-access-frltg" (OuterVolumeSpecName: "kube-api-access-frltg") pod "dee6d631-48d1-4137-9736-c028fb27e655" (UID: "dee6d631-48d1-4137-9736-c028fb27e655"). InnerVolumeSpecName "kube-api-access-frltg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.467775 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-utilities" (OuterVolumeSpecName: "utilities") pod "dee6d631-48d1-4137-9736-c028fb27e655" (UID: "dee6d631-48d1-4137-9736-c028fb27e655"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.489897 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.522965 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c7f1204-674f-4d4e-a695-28b2d0956b32" (UID: "7c7f1204-674f-4d4e-a695-28b2d0956b32"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.523052 4907 scope.go:117] "RemoveContainer" containerID="82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.536035 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dee6d631-48d1-4137-9736-c028fb27e655" (UID: "dee6d631-48d1-4137-9736-c028fb27e655"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.551518 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.561534 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-utilities\") pod \"7ee8faea-87ec-4620-b6a8-db398d35039a\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.561699 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-catalog-content\") pod \"1f9526ea-3ca9-4727-aadd-3103419511d9\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.561729 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-catalog-content\") pod \"7ee8faea-87ec-4620-b6a8-db398d35039a\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.561798 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9khpg\" (UniqueName: \"kubernetes.io/projected/1f9526ea-3ca9-4727-aadd-3103419511d9-kube-api-access-9khpg\") pod \"1f9526ea-3ca9-4727-aadd-3103419511d9\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.561826 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mthtb\" (UniqueName: \"kubernetes.io/projected/7ee8faea-87ec-4620-b6a8-db398d35039a-kube-api-access-mthtb\") pod \"7ee8faea-87ec-4620-b6a8-db398d35039a\" (UID: \"7ee8faea-87ec-4620-b6a8-db398d35039a\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.561847 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-utilities\") pod \"1f9526ea-3ca9-4727-aadd-3103419511d9\" (UID: \"1f9526ea-3ca9-4727-aadd-3103419511d9\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.562036 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c7f1204-674f-4d4e-a695-28b2d0956b32-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.562049 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.562059 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frltg\" (UniqueName: \"kubernetes.io/projected/dee6d631-48d1-4137-9736-c028fb27e655-kube-api-access-frltg\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.562070 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dee6d631-48d1-4137-9736-c028fb27e655-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.563905 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-utilities" (OuterVolumeSpecName: "utilities") pod "1f9526ea-3ca9-4727-aadd-3103419511d9" (UID: "1f9526ea-3ca9-4727-aadd-3103419511d9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.573507 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f9526ea-3ca9-4727-aadd-3103419511d9-kube-api-access-9khpg" (OuterVolumeSpecName: "kube-api-access-9khpg") pod "1f9526ea-3ca9-4727-aadd-3103419511d9" (UID: "1f9526ea-3ca9-4727-aadd-3103419511d9"). InnerVolumeSpecName "kube-api-access-9khpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.574300 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-utilities" (OuterVolumeSpecName: "utilities") pod "7ee8faea-87ec-4620-b6a8-db398d35039a" (UID: "7ee8faea-87ec-4620-b6a8-db398d35039a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.577985 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-87z2b"] Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.612300 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ee8faea-87ec-4620-b6a8-db398d35039a-kube-api-access-mthtb" (OuterVolumeSpecName: "kube-api-access-mthtb") pod "7ee8faea-87ec-4620-b6a8-db398d35039a" (UID: "7ee8faea-87ec-4620-b6a8-db398d35039a"). InnerVolumeSpecName "kube-api-access-mthtb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.622104 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ee8faea-87ec-4620-b6a8-db398d35039a" (UID: "7ee8faea-87ec-4620-b6a8-db398d35039a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.641617 4907 scope.go:117] "RemoveContainer" containerID="28d683b73c516fd16050038427976efece8058f5945364982b68f8b23b72aba2" Jan 27 18:11:54 crc kubenswrapper[4907]: W0127 18:11:54.647009 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5564598e_ff23_4f9e_b3de_64e127e94da6.slice/crio-c864af345f311bbec8b917c99e25bd66f3054900c35241a77f4cfbdfc03948bf WatchSource:0}: Error finding container c864af345f311bbec8b917c99e25bd66f3054900c35241a77f4cfbdfc03948bf: Status 404 returned error can't find the container with id c864af345f311bbec8b917c99e25bd66f3054900c35241a77f4cfbdfc03948bf Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.663623 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.663660 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee8faea-87ec-4620-b6a8-db398d35039a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.663704 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9khpg\" (UniqueName: \"kubernetes.io/projected/1f9526ea-3ca9-4727-aadd-3103419511d9-kube-api-access-9khpg\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.663714 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mthtb\" (UniqueName: \"kubernetes.io/projected/7ee8faea-87ec-4620-b6a8-db398d35039a-kube-api-access-mthtb\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.663723 4907 reconciler_common.go:293] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.679162 4907 scope.go:117] "RemoveContainer" containerID="f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.682491 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d\": container with ID starting with f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d not found: ID does not exist" containerID="f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.682535 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d"} err="failed to get container status \"f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d\": rpc error: code = NotFound desc = could not find container \"f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d\": container with ID starting with f9b0f21d9cfba2d482a0d7ac5860af7b974c7c3b3ba39ab1de361a133afb736d not found: ID does not exist" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.682578 4907 scope.go:117] "RemoveContainer" containerID="82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.683009 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5\": container with ID starting with 82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5 not found: ID does not exist" 
containerID="82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.683042 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5"} err="failed to get container status \"82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5\": rpc error: code = NotFound desc = could not find container \"82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5\": container with ID starting with 82aace31619b91d5c99902614c0c1656738aa5feada0657018ce01dd86e127a5 not found: ID does not exist" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.683066 4907 scope.go:117] "RemoveContainer" containerID="28d683b73c516fd16050038427976efece8058f5945364982b68f8b23b72aba2" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.683917 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28d683b73c516fd16050038427976efece8058f5945364982b68f8b23b72aba2\": container with ID starting with 28d683b73c516fd16050038427976efece8058f5945364982b68f8b23b72aba2 not found: ID does not exist" containerID="28d683b73c516fd16050038427976efece8058f5945364982b68f8b23b72aba2" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.684005 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28d683b73c516fd16050038427976efece8058f5945364982b68f8b23b72aba2"} err="failed to get container status \"28d683b73c516fd16050038427976efece8058f5945364982b68f8b23b72aba2\": rpc error: code = NotFound desc = could not find container \"28d683b73c516fd16050038427976efece8058f5945364982b68f8b23b72aba2\": container with ID starting with 28d683b73c516fd16050038427976efece8058f5945364982b68f8b23b72aba2 not found: ID does not exist" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.740283 4907 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mhc2f"] Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.744937 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mhc2f"] Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.751007 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f9526ea-3ca9-4727-aadd-3103419511d9" (UID: "1f9526ea-3ca9-4727-aadd-3103419511d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.766301 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9526ea-3ca9-4727-aadd-3103419511d9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.807794 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9f964d47c-l4mx8"] Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.808085 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerName="extract-content" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808106 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerName="extract-content" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.808123 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c7f1204-674f-4d4e-a695-28b2d0956b32" containerName="extract-content" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808132 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c7f1204-674f-4d4e-a695-28b2d0956b32" containerName="extract-content" Jan 27 18:11:54 crc kubenswrapper[4907]: 
E0127 18:11:54.808143 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dee6d631-48d1-4137-9736-c028fb27e655" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808152 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="dee6d631-48d1-4137-9736-c028fb27e655" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.808171 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dee6d631-48d1-4137-9736-c028fb27e655" containerName="extract-utilities" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808373 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="dee6d631-48d1-4137-9736-c028fb27e655" containerName="extract-utilities" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.808385 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808393 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.808408 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerName="extract-utilities" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808419 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerName="extract-utilities" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.808431 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dee6d631-48d1-4137-9736-c028fb27e655" containerName="extract-content" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808439 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="dee6d631-48d1-4137-9736-c028fb27e655" containerName="extract-content" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 
18:11:54.808452 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee8faea-87ec-4620-b6a8-db398d35039a" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808465 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee8faea-87ec-4620-b6a8-db398d35039a" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.808479 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c7f1204-674f-4d4e-a695-28b2d0956b32" containerName="extract-utilities" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808488 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c7f1204-674f-4d4e-a695-28b2d0956b32" containerName="extract-utilities" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.808496 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c7f1204-674f-4d4e-a695-28b2d0956b32" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808504 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c7f1204-674f-4d4e-a695-28b2d0956b32" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.808519 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee8faea-87ec-4620-b6a8-db398d35039a" containerName="extract-utilities" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808527 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee8faea-87ec-4620-b6a8-db398d35039a" containerName="extract-utilities" Jan 27 18:11:54 crc kubenswrapper[4907]: E0127 18:11:54.808537 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee8faea-87ec-4620-b6a8-db398d35039a" containerName="extract-content" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808546 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee8faea-87ec-4620-b6a8-db398d35039a" containerName="extract-content" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 
18:11:54.808721 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ee8faea-87ec-4620-b6a8-db398d35039a" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808732 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c7f1204-674f-4d4e-a695-28b2d0956b32" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808739 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="dee6d631-48d1-4137-9736-c028fb27e655" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.808751 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" containerName="registry-server" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.809521 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.812819 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.812883 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.813136 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.813618 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.814888 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.815708 4907 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.819214 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9f964d47c-l4mx8"] Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.819809 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.839664 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.968395 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-trusted-ca\") pod \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.968604 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5wcs\" (UniqueName: \"kubernetes.io/projected/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-kube-api-access-f5wcs\") pod \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.968648 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-operator-metrics\") pod \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\" (UID: \"1fb72397-1fbe-4f9d-976a-19ca15b2da2c\") " Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.968950 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-config\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.969001 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-proxy-ca-bundles\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.969047 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf9sg\" (UniqueName: \"kubernetes.io/projected/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-kube-api-access-hf9sg\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.969135 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-client-ca\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.969161 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-serving-cert\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 
18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.969761 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "1fb72397-1fbe-4f9d-976a-19ca15b2da2c" (UID: "1fb72397-1fbe-4f9d-976a-19ca15b2da2c"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.973540 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "1fb72397-1fbe-4f9d-976a-19ca15b2da2c" (UID: "1fb72397-1fbe-4f9d-976a-19ca15b2da2c"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:11:54 crc kubenswrapper[4907]: I0127 18:11:54.976262 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-kube-api-access-f5wcs" (OuterVolumeSpecName: "kube-api-access-f5wcs") pod "1fb72397-1fbe-4f9d-976a-19ca15b2da2c" (UID: "1fb72397-1fbe-4f9d-976a-19ca15b2da2c"). InnerVolumeSpecName "kube-api-access-f5wcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.070475 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-config\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.070625 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-proxy-ca-bundles\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.070704 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf9sg\" (UniqueName: \"kubernetes.io/projected/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-kube-api-access-hf9sg\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.070778 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-client-ca\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.070819 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-serving-cert\") pod 
\"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.070914 4907 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.070937 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5wcs\" (UniqueName: \"kubernetes.io/projected/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-kube-api-access-f5wcs\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.070956 4907 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1fb72397-1fbe-4f9d-976a-19ca15b2da2c-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.071872 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-config\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.072057 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-proxy-ca-bundles\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.072807 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-client-ca\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.075125 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-serving-cert\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.088651 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf9sg\" (UniqueName: \"kubernetes.io/projected/48e5b57d-d01a-441e-beac-ef5e5d74dbc1-kube-api-access-hf9sg\") pod \"controller-manager-9f964d47c-l4mx8\" (UID: \"48e5b57d-d01a-441e-beac-ef5e5d74dbc1\") " pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.138082 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.337745 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9f964d47c-l4mx8"] Jan 27 18:11:55 crc kubenswrapper[4907]: W0127 18:11:55.345519 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48e5b57d_d01a_441e_beac_ef5e5d74dbc1.slice/crio-5cb1f44507ad145f99cf35346172d2170711e523729c95d25fa215475a758261 WatchSource:0}: Error finding container 5cb1f44507ad145f99cf35346172d2170711e523729c95d25fa215475a758261: Status 404 returned error can't find the container with id 5cb1f44507ad145f99cf35346172d2170711e523729c95d25fa215475a758261 Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.439406 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" event={"ID":"5564598e-ff23-4f9e-b3de-64e127e94da6","Type":"ContainerStarted","Data":"8def76b22114f6c4d4e31249c2b3f5500d827d1bd3f90a2f15e0e6e4587d70e2"} Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.439452 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" event={"ID":"5564598e-ff23-4f9e-b3de-64e127e94da6","Type":"ContainerStarted","Data":"c864af345f311bbec8b917c99e25bd66f3054900c35241a77f4cfbdfc03948bf"} Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.440606 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.442848 4907 generic.go:334] "Generic (PLEG): container finished" podID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerID="3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163" exitCode=0 Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 
18:11:55.442889 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" event={"ID":"1fb72397-1fbe-4f9d-976a-19ca15b2da2c","Type":"ContainerDied","Data":"3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163"} Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.442907 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" event={"ID":"1fb72397-1fbe-4f9d-976a-19ca15b2da2c","Type":"ContainerDied","Data":"751f6790eddcfff181547cb7090e8c80fd9fdf4c4aa3c45b341c6ab12bb2cee7"} Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.442923 4907 scope.go:117] "RemoveContainer" containerID="3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.442978 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pn59x" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.445973 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.452254 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cg67x" event={"ID":"7ee8faea-87ec-4620-b6a8-db398d35039a","Type":"ContainerDied","Data":"327477b6362453b7f241bd4005f967f63bfcd92d60574b597325042d23e6ed02"} Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.452367 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cg67x" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.454227 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" event={"ID":"48e5b57d-d01a-441e-beac-ef5e5d74dbc1","Type":"ContainerStarted","Data":"5cb1f44507ad145f99cf35346172d2170711e523729c95d25fa215475a758261"} Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.458056 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhwph" event={"ID":"1f9526ea-3ca9-4727-aadd-3103419511d9","Type":"ContainerDied","Data":"a25f71c0e1b8e215c2c97229db7543cf578b69337a997116cde1864efa87346a"} Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.458169 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-klwtz" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.458174 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jhwph" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.465190 4907 scope.go:117] "RemoveContainer" containerID="a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.470587 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" podStartSLOduration=2.470547217 podStartE2EDuration="2.470547217s" podCreationTimestamp="2026-01-27 18:11:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:11:55.463854749 +0000 UTC m=+370.593137361" watchObservedRunningTime="2026-01-27 18:11:55.470547217 +0000 UTC m=+370.599829859" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.513465 4907 scope.go:117] "RemoveContainer" containerID="3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163" Jan 27 18:11:55 crc kubenswrapper[4907]: E0127 18:11:55.519649 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163\": container with ID starting with 3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163 not found: ID does not exist" containerID="3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.519714 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163"} err="failed to get container status \"3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163\": rpc error: code = NotFound desc = could not find container \"3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163\": container with ID starting with 
3003c24527aa9ef4aa019cf415e75e0bfc2fb096efe50ebbb9b33f491c257163 not found: ID does not exist" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.519754 4907 scope.go:117] "RemoveContainer" containerID="a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad" Jan 27 18:11:55 crc kubenswrapper[4907]: E0127 18:11:55.523757 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad\": container with ID starting with a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad not found: ID does not exist" containerID="a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.523846 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cg67x"] Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.523843 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad"} err="failed to get container status \"a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad\": rpc error: code = NotFound desc = could not find container \"a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad\": container with ID starting with a5efc95ce6aab6855574076a242dcb2160eb88dd89e65f9fb745c83fc8cc63ad not found: ID does not exist" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.523874 4907 scope.go:117] "RemoveContainer" containerID="cca800a132dbc0637ef9f8a151d48baa2ebe9b0c352f4e619bea71a73ed6edb4" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.531057 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cg67x"] Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.536467 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-klwtz"] Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.540521 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-klwtz"] Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.545805 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pn59x"] Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.548591 4907 scope.go:117] "RemoveContainer" containerID="ed1ff5a394e52796e4f4ec3501247d97700ad989bc354c05f28efa01945dae35" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.548953 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pn59x"] Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.560531 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jhwph"] Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.560609 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jhwph"] Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.565282 4907 scope.go:117] "RemoveContainer" containerID="be6c8c2b32c82dd2e2cee12f93b1053ef5ddc94b250cbda98a5b387f916f54b6" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.580338 4907 scope.go:117] "RemoveContainer" containerID="c4e1f6d017c07ec4d982e1d85c078c0a9d796f21d3669902a20eff03a671e183" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.593330 4907 scope.go:117] "RemoveContainer" containerID="a32059dda4c689ff3e20fb83b5604a26321637f3f3d16ef0f676c3787ce5589e" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.622364 4907 scope.go:117] "RemoveContainer" containerID="654679c743b9560bbba18b38261b7b4cf9709df04c5818506bb60f06a2ff6062" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.754095 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1f9526ea-3ca9-4727-aadd-3103419511d9" path="/var/lib/kubelet/pods/1f9526ea-3ca9-4727-aadd-3103419511d9/volumes" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.754903 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" path="/var/lib/kubelet/pods/1fb72397-1fbe-4f9d-976a-19ca15b2da2c/volumes" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.755369 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c7f1204-674f-4d4e-a695-28b2d0956b32" path="/var/lib/kubelet/pods/7c7f1204-674f-4d4e-a695-28b2d0956b32/volumes" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.755971 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ee8faea-87ec-4620-b6a8-db398d35039a" path="/var/lib/kubelet/pods/7ee8faea-87ec-4620-b6a8-db398d35039a/volumes" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.756650 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7c08430-9a0d-4699-8521-3ee5c774ceab" path="/var/lib/kubelet/pods/b7c08430-9a0d-4699-8521-3ee5c774ceab/volumes" Jan 27 18:11:55 crc kubenswrapper[4907]: I0127 18:11:55.757447 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dee6d631-48d1-4137-9736-c028fb27e655" path="/var/lib/kubelet/pods/dee6d631-48d1-4137-9736-c028fb27e655/volumes" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.147042 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wz7rn"] Jan 27 18:11:56 crc kubenswrapper[4907]: E0127 18:11:56.148325 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerName="marketplace-operator" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.148348 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerName="marketplace-operator" Jan 27 18:11:56 crc kubenswrapper[4907]: E0127 
18:11:56.148359 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerName="marketplace-operator" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.148366 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerName="marketplace-operator" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.148486 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerName="marketplace-operator" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.148502 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fb72397-1fbe-4f9d-976a-19ca15b2da2c" containerName="marketplace-operator" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.149368 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.153146 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.154106 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wz7rn"] Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.285052 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ec7dee3-a9ee-4bb8-b444-899c120854a7-utilities\") pod \"redhat-marketplace-wz7rn\" (UID: \"1ec7dee3-a9ee-4bb8-b444-899c120854a7\") " pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.285451 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1ec7dee3-a9ee-4bb8-b444-899c120854a7-catalog-content\") pod \"redhat-marketplace-wz7rn\" (UID: \"1ec7dee3-a9ee-4bb8-b444-899c120854a7\") " pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.285493 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdzkz\" (UniqueName: \"kubernetes.io/projected/1ec7dee3-a9ee-4bb8-b444-899c120854a7-kube-api-access-cdzkz\") pod \"redhat-marketplace-wz7rn\" (UID: \"1ec7dee3-a9ee-4bb8-b444-899c120854a7\") " pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.352173 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vrcdt"] Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.353226 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.360264 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vrcdt"] Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.362898 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.386629 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt79h\" (UniqueName: \"kubernetes.io/projected/8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee-kube-api-access-vt79h\") pod \"certified-operators-vrcdt\" (UID: \"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee\") " pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.386745 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee-catalog-content\") pod \"certified-operators-vrcdt\" (UID: \"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee\") " pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.386781 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ec7dee3-a9ee-4bb8-b444-899c120854a7-utilities\") pod \"redhat-marketplace-wz7rn\" (UID: \"1ec7dee3-a9ee-4bb8-b444-899c120854a7\") " pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.386828 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ec7dee3-a9ee-4bb8-b444-899c120854a7-catalog-content\") pod \"redhat-marketplace-wz7rn\" (UID: \"1ec7dee3-a9ee-4bb8-b444-899c120854a7\") " pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.386859 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee-utilities\") pod \"certified-operators-vrcdt\" (UID: \"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee\") " pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.386903 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdzkz\" (UniqueName: \"kubernetes.io/projected/1ec7dee3-a9ee-4bb8-b444-899c120854a7-kube-api-access-cdzkz\") pod \"redhat-marketplace-wz7rn\" (UID: \"1ec7dee3-a9ee-4bb8-b444-899c120854a7\") " pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.387659 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1ec7dee3-a9ee-4bb8-b444-899c120854a7-catalog-content\") pod \"redhat-marketplace-wz7rn\" (UID: \"1ec7dee3-a9ee-4bb8-b444-899c120854a7\") " pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.387729 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ec7dee3-a9ee-4bb8-b444-899c120854a7-utilities\") pod \"redhat-marketplace-wz7rn\" (UID: \"1ec7dee3-a9ee-4bb8-b444-899c120854a7\") " pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.409015 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdzkz\" (UniqueName: \"kubernetes.io/projected/1ec7dee3-a9ee-4bb8-b444-899c120854a7-kube-api-access-cdzkz\") pod \"redhat-marketplace-wz7rn\" (UID: \"1ec7dee3-a9ee-4bb8-b444-899c120854a7\") " pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.467863 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" event={"ID":"48e5b57d-d01a-441e-beac-ef5e5d74dbc1","Type":"ContainerStarted","Data":"dbb010dcd85aacf9f3dabdbbd2ddbadc6ea10bcdc340a8c793b5232b1a3e3277"} Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.468951 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.472417 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.480694 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.490857 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt79h\" (UniqueName: \"kubernetes.io/projected/8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee-kube-api-access-vt79h\") pod \"certified-operators-vrcdt\" (UID: \"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee\") " pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.491651 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee-catalog-content\") pod \"certified-operators-vrcdt\" (UID: \"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee\") " pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.492098 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee-utilities\") pod \"certified-operators-vrcdt\" (UID: \"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee\") " pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.492153 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee-catalog-content\") pod \"certified-operators-vrcdt\" (UID: \"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee\") " pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.492645 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee-utilities\") pod \"certified-operators-vrcdt\" (UID: \"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee\") " 
pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.515187 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt79h\" (UniqueName: \"kubernetes.io/projected/8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee-kube-api-access-vt79h\") pod \"certified-operators-vrcdt\" (UID: \"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee\") " pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.521430 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.521500 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.522225 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" podStartSLOduration=3.522203176 podStartE2EDuration="3.522203176s" podCreationTimestamp="2026-01-27 18:11:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:11:56.494254532 +0000 UTC m=+371.623537174" watchObservedRunningTime="2026-01-27 18:11:56.522203176 +0000 UTC m=+371.651485798" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.676988 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:11:56 crc kubenswrapper[4907]: I0127 18:11:56.904431 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wz7rn"] Jan 27 18:11:56 crc kubenswrapper[4907]: W0127 18:11:56.907730 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ec7dee3_a9ee_4bb8_b444_899c120854a7.slice/crio-647406ebda174fa932aeb2005559ca7739dd2145bdf073b0c7cbd4b6072d46b0 WatchSource:0}: Error finding container 647406ebda174fa932aeb2005559ca7739dd2145bdf073b0c7cbd4b6072d46b0: Status 404 returned error can't find the container with id 647406ebda174fa932aeb2005559ca7739dd2145bdf073b0c7cbd4b6072d46b0 Jan 27 18:11:57 crc kubenswrapper[4907]: I0127 18:11:57.104247 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vrcdt"] Jan 27 18:11:57 crc kubenswrapper[4907]: W0127 18:11:57.114667 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bc8a6bd_6efd_4f2d_89f5_0ceb2441efee.slice/crio-da01613570f2b291a6625fe6e20237e01db310abf9068dbc0bc5ed513fd8d90f WatchSource:0}: Error finding container da01613570f2b291a6625fe6e20237e01db310abf9068dbc0bc5ed513fd8d90f: Status 404 returned error can't find the container with id da01613570f2b291a6625fe6e20237e01db310abf9068dbc0bc5ed513fd8d90f Jan 27 18:11:57 crc kubenswrapper[4907]: I0127 18:11:57.487420 4907 generic.go:334] "Generic (PLEG): container finished" podID="1ec7dee3-a9ee-4bb8-b444-899c120854a7" containerID="ca90b7d665b9701398c83ce1968de2c0817cf0dde4163aea9d60792056f97329" exitCode=0 Jan 27 18:11:57 crc kubenswrapper[4907]: I0127 18:11:57.487628 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz7rn" 
event={"ID":"1ec7dee3-a9ee-4bb8-b444-899c120854a7","Type":"ContainerDied","Data":"ca90b7d665b9701398c83ce1968de2c0817cf0dde4163aea9d60792056f97329"} Jan 27 18:11:57 crc kubenswrapper[4907]: I0127 18:11:57.487682 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz7rn" event={"ID":"1ec7dee3-a9ee-4bb8-b444-899c120854a7","Type":"ContainerStarted","Data":"647406ebda174fa932aeb2005559ca7739dd2145bdf073b0c7cbd4b6072d46b0"} Jan 27 18:11:57 crc kubenswrapper[4907]: I0127 18:11:57.491769 4907 generic.go:334] "Generic (PLEG): container finished" podID="8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee" containerID="27955dbca42b3a8f1a7aff4e83fce49ec3898fcf11a027de65f125acb5a1b02f" exitCode=0 Jan 27 18:11:57 crc kubenswrapper[4907]: I0127 18:11:57.491865 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrcdt" event={"ID":"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee","Type":"ContainerDied","Data":"27955dbca42b3a8f1a7aff4e83fce49ec3898fcf11a027de65f125acb5a1b02f"} Jan 27 18:11:57 crc kubenswrapper[4907]: I0127 18:11:57.491913 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrcdt" event={"ID":"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee","Type":"ContainerStarted","Data":"da01613570f2b291a6625fe6e20237e01db310abf9068dbc0bc5ed513fd8d90f"} Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.536748 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dhv2c"] Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.538852 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.542147 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.545438 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dhv2c"] Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.622765 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-catalog-content\") pod \"community-operators-dhv2c\" (UID: \"bae6221e-526b-4cc4-9f9b-1079238c9100\") " pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.622928 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-utilities\") pod \"community-operators-dhv2c\" (UID: \"bae6221e-526b-4cc4-9f9b-1079238c9100\") " pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.622983 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42ntx\" (UniqueName: \"kubernetes.io/projected/bae6221e-526b-4cc4-9f9b-1079238c9100-kube-api-access-42ntx\") pod \"community-operators-dhv2c\" (UID: \"bae6221e-526b-4cc4-9f9b-1079238c9100\") " pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.724050 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-catalog-content\") pod \"community-operators-dhv2c\" (UID: 
\"bae6221e-526b-4cc4-9f9b-1079238c9100\") " pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.724143 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-utilities\") pod \"community-operators-dhv2c\" (UID: \"bae6221e-526b-4cc4-9f9b-1079238c9100\") " pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.724198 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42ntx\" (UniqueName: \"kubernetes.io/projected/bae6221e-526b-4cc4-9f9b-1079238c9100-kube-api-access-42ntx\") pod \"community-operators-dhv2c\" (UID: \"bae6221e-526b-4cc4-9f9b-1079238c9100\") " pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.725139 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-catalog-content\") pod \"community-operators-dhv2c\" (UID: \"bae6221e-526b-4cc4-9f9b-1079238c9100\") " pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.725168 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-utilities\") pod \"community-operators-dhv2c\" (UID: \"bae6221e-526b-4cc4-9f9b-1079238c9100\") " pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.739837 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dv4j2"] Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.741020 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.744402 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.745613 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42ntx\" (UniqueName: \"kubernetes.io/projected/bae6221e-526b-4cc4-9f9b-1079238c9100-kube-api-access-42ntx\") pod \"community-operators-dhv2c\" (UID: \"bae6221e-526b-4cc4-9f9b-1079238c9100\") " pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.750105 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dv4j2"] Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.855664 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.927398 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdf800ed-f5e8-4478-9e7a-98c7c95c7c52-utilities\") pod \"redhat-operators-dv4j2\" (UID: \"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52\") " pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.928823 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdf800ed-f5e8-4478-9e7a-98c7c95c7c52-catalog-content\") pod \"redhat-operators-dv4j2\" (UID: \"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52\") " pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:11:58 crc kubenswrapper[4907]: I0127 18:11:58.929062 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-z7thh\" (UniqueName: \"kubernetes.io/projected/fdf800ed-f5e8-4478-9e7a-98c7c95c7c52-kube-api-access-z7thh\") pod \"redhat-operators-dv4j2\" (UID: \"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52\") " pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.030117 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7thh\" (UniqueName: \"kubernetes.io/projected/fdf800ed-f5e8-4478-9e7a-98c7c95c7c52-kube-api-access-z7thh\") pod \"redhat-operators-dv4j2\" (UID: \"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52\") " pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.030260 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdf800ed-f5e8-4478-9e7a-98c7c95c7c52-utilities\") pod \"redhat-operators-dv4j2\" (UID: \"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52\") " pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.030289 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdf800ed-f5e8-4478-9e7a-98c7c95c7c52-catalog-content\") pod \"redhat-operators-dv4j2\" (UID: \"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52\") " pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.030901 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdf800ed-f5e8-4478-9e7a-98c7c95c7c52-utilities\") pod \"redhat-operators-dv4j2\" (UID: \"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52\") " pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.030975 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fdf800ed-f5e8-4478-9e7a-98c7c95c7c52-catalog-content\") pod \"redhat-operators-dv4j2\" (UID: \"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52\") " pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.051991 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7thh\" (UniqueName: \"kubernetes.io/projected/fdf800ed-f5e8-4478-9e7a-98c7c95c7c52-kube-api-access-z7thh\") pod \"redhat-operators-dv4j2\" (UID: \"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52\") " pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.075452 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.084759 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dhv2c"] Jan 27 18:11:59 crc kubenswrapper[4907]: W0127 18:11:59.089421 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbae6221e_526b_4cc4_9f9b_1079238c9100.slice/crio-9af132d6e463262eafbf32983cfb3b57f393e93d04c9a4adb9279236600176e5 WatchSource:0}: Error finding container 9af132d6e463262eafbf32983cfb3b57f393e93d04c9a4adb9279236600176e5: Status 404 returned error can't find the container with id 9af132d6e463262eafbf32983cfb3b57f393e93d04c9a4adb9279236600176e5 Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.493177 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dv4j2"] Jan 27 18:11:59 crc kubenswrapper[4907]: W0127 18:11:59.499525 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdf800ed_f5e8_4478_9e7a_98c7c95c7c52.slice/crio-c4b31891260fcec4d6e184e9789f1f00f75ce21d9587f39f5176915ae765be6d 
WatchSource:0}: Error finding container c4b31891260fcec4d6e184e9789f1f00f75ce21d9587f39f5176915ae765be6d: Status 404 returned error can't find the container with id c4b31891260fcec4d6e184e9789f1f00f75ce21d9587f39f5176915ae765be6d Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.503837 4907 generic.go:334] "Generic (PLEG): container finished" podID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerID="74c450a7c4e4a16e788bf96635acd49f01f09f365c5a97fb77a1f1947ba88ae4" exitCode=0 Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.503895 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhv2c" event={"ID":"bae6221e-526b-4cc4-9f9b-1079238c9100","Type":"ContainerDied","Data":"74c450a7c4e4a16e788bf96635acd49f01f09f365c5a97fb77a1f1947ba88ae4"} Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.503922 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhv2c" event={"ID":"bae6221e-526b-4cc4-9f9b-1079238c9100","Type":"ContainerStarted","Data":"9af132d6e463262eafbf32983cfb3b57f393e93d04c9a4adb9279236600176e5"} Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.511673 4907 generic.go:334] "Generic (PLEG): container finished" podID="1ec7dee3-a9ee-4bb8-b444-899c120854a7" containerID="e68013733679d8e7f3be167dec106b649367fb7c36522d88ada5adc70676933c" exitCode=0 Jan 27 18:11:59 crc kubenswrapper[4907]: I0127 18:11:59.511707 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz7rn" event={"ID":"1ec7dee3-a9ee-4bb8-b444-899c120854a7","Type":"ContainerDied","Data":"e68013733679d8e7f3be167dec106b649367fb7c36522d88ada5adc70676933c"} Jan 27 18:12:00 crc kubenswrapper[4907]: I0127 18:12:00.520372 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz7rn" 
event={"ID":"1ec7dee3-a9ee-4bb8-b444-899c120854a7","Type":"ContainerStarted","Data":"cc7da9e386977f5b19150dca1e503ede1a777a4454d7f9fa64b03a15715fd90e"} Jan 27 18:12:00 crc kubenswrapper[4907]: I0127 18:12:00.522903 4907 generic.go:334] "Generic (PLEG): container finished" podID="fdf800ed-f5e8-4478-9e7a-98c7c95c7c52" containerID="b7266f750cde318d9c3f62b27d5f6047c4fea9efc35c551ff041bb8284b03a09" exitCode=0 Jan 27 18:12:00 crc kubenswrapper[4907]: I0127 18:12:00.522950 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dv4j2" event={"ID":"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52","Type":"ContainerDied","Data":"b7266f750cde318d9c3f62b27d5f6047c4fea9efc35c551ff041bb8284b03a09"} Jan 27 18:12:00 crc kubenswrapper[4907]: I0127 18:12:00.522977 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dv4j2" event={"ID":"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52","Type":"ContainerStarted","Data":"c4b31891260fcec4d6e184e9789f1f00f75ce21d9587f39f5176915ae765be6d"} Jan 27 18:12:00 crc kubenswrapper[4907]: I0127 18:12:00.537679 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wz7rn" podStartSLOduration=2.060381655 podStartE2EDuration="4.537662497s" podCreationTimestamp="2026-01-27 18:11:56 +0000 UTC" firstStartedPulling="2026-01-27 18:11:57.489734174 +0000 UTC m=+372.619016776" lastFinishedPulling="2026-01-27 18:11:59.967015006 +0000 UTC m=+375.096297618" observedRunningTime="2026-01-27 18:12:00.536524003 +0000 UTC m=+375.665806615" watchObservedRunningTime="2026-01-27 18:12:00.537662497 +0000 UTC m=+375.666945109" Jan 27 18:12:01 crc kubenswrapper[4907]: I0127 18:12:01.530377 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dv4j2" event={"ID":"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52","Type":"ContainerStarted","Data":"6ae58e1ef09b4f66b2717c8cf4aadcafcef2699705a58628245ddee799aad597"} 
Jan 27 18:12:01 crc kubenswrapper[4907]: I0127 18:12:01.533088 4907 generic.go:334] "Generic (PLEG): container finished" podID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerID="9767b9bd6335f81b22f4e7d1b7fb00bd57b538db401b0811063af3d06773de87" exitCode=0 Jan 27 18:12:01 crc kubenswrapper[4907]: I0127 18:12:01.533146 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhv2c" event={"ID":"bae6221e-526b-4cc4-9f9b-1079238c9100","Type":"ContainerDied","Data":"9767b9bd6335f81b22f4e7d1b7fb00bd57b538db401b0811063af3d06773de87"} Jan 27 18:12:02 crc kubenswrapper[4907]: I0127 18:12:02.541213 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhv2c" event={"ID":"bae6221e-526b-4cc4-9f9b-1079238c9100","Type":"ContainerStarted","Data":"e575043c95bcc3816c2d34c76628c5d1837e3db19e0b22a6a4f4f7c688dfd5fc"} Jan 27 18:12:02 crc kubenswrapper[4907]: I0127 18:12:02.542761 4907 generic.go:334] "Generic (PLEG): container finished" podID="fdf800ed-f5e8-4478-9e7a-98c7c95c7c52" containerID="6ae58e1ef09b4f66b2717c8cf4aadcafcef2699705a58628245ddee799aad597" exitCode=0 Jan 27 18:12:02 crc kubenswrapper[4907]: I0127 18:12:02.542784 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dv4j2" event={"ID":"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52","Type":"ContainerDied","Data":"6ae58e1ef09b4f66b2717c8cf4aadcafcef2699705a58628245ddee799aad597"} Jan 27 18:12:02 crc kubenswrapper[4907]: I0127 18:12:02.560811 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dhv2c" podStartSLOduration=2.084805339 podStartE2EDuration="4.560789571s" podCreationTimestamp="2026-01-27 18:11:58 +0000 UTC" firstStartedPulling="2026-01-27 18:11:59.507967295 +0000 UTC m=+374.637249907" lastFinishedPulling="2026-01-27 18:12:01.983951527 +0000 UTC m=+377.113234139" observedRunningTime="2026-01-27 18:12:02.559032679 +0000 
UTC m=+377.688315281" watchObservedRunningTime="2026-01-27 18:12:02.560789571 +0000 UTC m=+377.690072183" Jan 27 18:12:03 crc kubenswrapper[4907]: I0127 18:12:03.548868 4907 generic.go:334] "Generic (PLEG): container finished" podID="8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee" containerID="e6c31016bbde6bf87c1c86e3bdb5686fcee00d9e883d03acd23cb8341dbc91a1" exitCode=0 Jan 27 18:12:03 crc kubenswrapper[4907]: I0127 18:12:03.549328 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrcdt" event={"ID":"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee","Type":"ContainerDied","Data":"e6c31016bbde6bf87c1c86e3bdb5686fcee00d9e883d03acd23cb8341dbc91a1"} Jan 27 18:12:03 crc kubenswrapper[4907]: I0127 18:12:03.553618 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dv4j2" event={"ID":"fdf800ed-f5e8-4478-9e7a-98c7c95c7c52","Type":"ContainerStarted","Data":"f0f254763b9ca17af73215d2bb760fbeaf3f219136b79a712b781739bbf3dec8"} Jan 27 18:12:03 crc kubenswrapper[4907]: I0127 18:12:03.593590 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dv4j2" podStartSLOduration=3.173656443 podStartE2EDuration="5.593574383s" podCreationTimestamp="2026-01-27 18:11:58 +0000 UTC" firstStartedPulling="2026-01-27 18:12:00.524225741 +0000 UTC m=+375.653508353" lastFinishedPulling="2026-01-27 18:12:02.944143691 +0000 UTC m=+378.073426293" observedRunningTime="2026-01-27 18:12:03.591343238 +0000 UTC m=+378.720625870" watchObservedRunningTime="2026-01-27 18:12:03.593574383 +0000 UTC m=+378.722856995" Jan 27 18:12:04 crc kubenswrapper[4907]: I0127 18:12:04.560302 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrcdt" event={"ID":"8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee","Type":"ContainerStarted","Data":"18d1cc8984ef390f3edf73cb13f16ba0f43d8ca0c9235e5957bdec90d2ad82cb"} Jan 27 18:12:04 crc kubenswrapper[4907]: 
I0127 18:12:04.585025 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vrcdt" podStartSLOduration=1.881814831 podStartE2EDuration="8.585002627s" podCreationTimestamp="2026-01-27 18:11:56 +0000 UTC" firstStartedPulling="2026-01-27 18:11:57.493425173 +0000 UTC m=+372.622707815" lastFinishedPulling="2026-01-27 18:12:04.196612989 +0000 UTC m=+379.325895611" observedRunningTime="2026-01-27 18:12:04.581447252 +0000 UTC m=+379.710729864" watchObservedRunningTime="2026-01-27 18:12:04.585002627 +0000 UTC m=+379.714285239" Jan 27 18:12:06 crc kubenswrapper[4907]: I0127 18:12:06.481491 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:12:06 crc kubenswrapper[4907]: I0127 18:12:06.481990 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:12:06 crc kubenswrapper[4907]: I0127 18:12:06.525881 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:12:06 crc kubenswrapper[4907]: I0127 18:12:06.609120 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wz7rn" Jan 27 18:12:06 crc kubenswrapper[4907]: I0127 18:12:06.678128 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:12:06 crc kubenswrapper[4907]: I0127 18:12:06.678181 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:12:06 crc kubenswrapper[4907]: I0127 18:12:06.723197 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:12:08 crc kubenswrapper[4907]: I0127 18:12:08.855992 4907 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:12:08 crc kubenswrapper[4907]: I0127 18:12:08.859289 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:12:08 crc kubenswrapper[4907]: I0127 18:12:08.908356 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:12:09 crc kubenswrapper[4907]: I0127 18:12:09.076402 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:12:09 crc kubenswrapper[4907]: I0127 18:12:09.076458 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:12:09 crc kubenswrapper[4907]: I0127 18:12:09.118414 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:12:09 crc kubenswrapper[4907]: I0127 18:12:09.633384 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dv4j2" Jan 27 18:12:09 crc kubenswrapper[4907]: I0127 18:12:09.634971 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dhv2c" Jan 27 18:12:16 crc kubenswrapper[4907]: I0127 18:12:16.729044 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vrcdt" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.060094 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq"] Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.065713 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.074107 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.074829 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.075163 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.075353 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.075829 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.078630 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq"] Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.176012 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/1f06e513-6675-48e3-a197-46a4df6eb319-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-zd4sq\" (UID: \"1f06e513-6675-48e3-a197-46a4df6eb319\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.176066 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l44k5\" (UniqueName: \"kubernetes.io/projected/1f06e513-6675-48e3-a197-46a4df6eb319-kube-api-access-l44k5\") pod 
\"cluster-monitoring-operator-6d5b84845-zd4sq\" (UID: \"1f06e513-6675-48e3-a197-46a4df6eb319\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.176096 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/1f06e513-6675-48e3-a197-46a4df6eb319-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-zd4sq\" (UID: \"1f06e513-6675-48e3-a197-46a4df6eb319\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.276897 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/1f06e513-6675-48e3-a197-46a4df6eb319-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-zd4sq\" (UID: \"1f06e513-6675-48e3-a197-46a4df6eb319\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.277049 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/1f06e513-6675-48e3-a197-46a4df6eb319-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-zd4sq\" (UID: \"1f06e513-6675-48e3-a197-46a4df6eb319\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.277082 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l44k5\" (UniqueName: \"kubernetes.io/projected/1f06e513-6675-48e3-a197-46a4df6eb319-kube-api-access-l44k5\") pod \"cluster-monitoring-operator-6d5b84845-zd4sq\" (UID: \"1f06e513-6675-48e3-a197-46a4df6eb319\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" Jan 27 18:12:26 crc 
kubenswrapper[4907]: I0127 18:12:26.278908 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/1f06e513-6675-48e3-a197-46a4df6eb319-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-zd4sq\" (UID: \"1f06e513-6675-48e3-a197-46a4df6eb319\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.297985 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/1f06e513-6675-48e3-a197-46a4df6eb319-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-zd4sq\" (UID: \"1f06e513-6675-48e3-a197-46a4df6eb319\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.298050 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l44k5\" (UniqueName: \"kubernetes.io/projected/1f06e513-6675-48e3-a197-46a4df6eb319-kube-api-access-l44k5\") pod \"cluster-monitoring-operator-6d5b84845-zd4sq\" (UID: \"1f06e513-6675-48e3-a197-46a4df6eb319\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.407187 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.521105 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.521544 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:12:26 crc kubenswrapper[4907]: I0127 18:12:26.824793 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq"] Jan 27 18:12:27 crc kubenswrapper[4907]: I0127 18:12:27.689212 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" event={"ID":"1f06e513-6675-48e3-a197-46a4df6eb319","Type":"ContainerStarted","Data":"b75c64b04339409bf659d0d6090c83d07ef71d5c949ecc4c3f5e577280edd415"} Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.240337 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fqkck"] Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.241907 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.259728 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fqkck"] Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.346669 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf"] Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.347404 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.349239 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-tmmcx" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.349255 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.362040 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf"] Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.422590 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/21da9305-e6ab-4378-b316-7a3ffc47faa0-registry-certificates\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.422766 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzvsb\" (UniqueName: 
\"kubernetes.io/projected/21da9305-e6ab-4378-b316-7a3ffc47faa0-kube-api-access-qzvsb\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.422837 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/21da9305-e6ab-4378-b316-7a3ffc47faa0-registry-tls\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.422884 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.422910 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/21da9305-e6ab-4378-b316-7a3ffc47faa0-bound-sa-token\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.422948 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/21da9305-e6ab-4378-b316-7a3ffc47faa0-trusted-ca\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc 
kubenswrapper[4907]: I0127 18:12:29.422997 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/21da9305-e6ab-4378-b316-7a3ffc47faa0-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.423080 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/21da9305-e6ab-4378-b316-7a3ffc47faa0-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.446054 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.524494 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/21da9305-e6ab-4378-b316-7a3ffc47faa0-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.524590 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: 
\"kubernetes.io/secret/dccc085e-3aae-4c8e-8737-699c60063730-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-2fplf\" (UID: \"dccc085e-3aae-4c8e-8737-699c60063730\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.524633 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/21da9305-e6ab-4378-b316-7a3ffc47faa0-registry-certificates\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.524673 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzvsb\" (UniqueName: \"kubernetes.io/projected/21da9305-e6ab-4378-b316-7a3ffc47faa0-kube-api-access-qzvsb\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.524695 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/21da9305-e6ab-4378-b316-7a3ffc47faa0-registry-tls\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.524715 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/21da9305-e6ab-4378-b316-7a3ffc47faa0-bound-sa-token\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.524759 
4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/21da9305-e6ab-4378-b316-7a3ffc47faa0-trusted-ca\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.524781 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/21da9305-e6ab-4378-b316-7a3ffc47faa0-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.525161 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/21da9305-e6ab-4378-b316-7a3ffc47faa0-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.526356 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/21da9305-e6ab-4378-b316-7a3ffc47faa0-registry-certificates\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.526356 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/21da9305-e6ab-4378-b316-7a3ffc47faa0-trusted-ca\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 
18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.531417 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/21da9305-e6ab-4378-b316-7a3ffc47faa0-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.531988 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/21da9305-e6ab-4378-b316-7a3ffc47faa0-registry-tls\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.543551 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzvsb\" (UniqueName: \"kubernetes.io/projected/21da9305-e6ab-4378-b316-7a3ffc47faa0-kube-api-access-qzvsb\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.548984 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/21da9305-e6ab-4378-b316-7a3ffc47faa0-bound-sa-token\") pod \"image-registry-66df7c8f76-fqkck\" (UID: \"21da9305-e6ab-4378-b316-7a3ffc47faa0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.555909 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.626169 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/dccc085e-3aae-4c8e-8737-699c60063730-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-2fplf\" (UID: \"dccc085e-3aae-4c8e-8737-699c60063730\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.629849 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/dccc085e-3aae-4c8e-8737-699c60063730-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-2fplf\" (UID: \"dccc085e-3aae-4c8e-8737-699c60063730\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.661313 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.703286 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" event={"ID":"1f06e513-6675-48e3-a197-46a4df6eb319","Type":"ContainerStarted","Data":"eae8c4368e113dc5cfc98ba0fdba645e199a4c98b9caa3fda35c3328edd06594"} Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.725764 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-zd4sq" podStartSLOduration=1.863310056 podStartE2EDuration="3.725740501s" podCreationTimestamp="2026-01-27 18:12:26 +0000 UTC" firstStartedPulling="2026-01-27 18:12:26.835891505 +0000 UTC m=+401.965174127" lastFinishedPulling="2026-01-27 18:12:28.69832196 +0000 UTC m=+403.827604572" observedRunningTime="2026-01-27 18:12:29.720451816 +0000 UTC m=+404.849734438" watchObservedRunningTime="2026-01-27 18:12:29.725740501 +0000 UTC m=+404.855023113" Jan 27 18:12:29 crc kubenswrapper[4907]: I0127 18:12:29.984333 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fqkck"] Jan 27 18:12:29 crc kubenswrapper[4907]: W0127 18:12:29.992994 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21da9305_e6ab_4378_b316_7a3ffc47faa0.slice/crio-c5c7db16e5367c27934b40f42045f1b752121a35ccb3e0cd271251627eb3c520 WatchSource:0}: Error finding container c5c7db16e5367c27934b40f42045f1b752121a35ccb3e0cd271251627eb3c520: Status 404 returned error can't find the container with id c5c7db16e5367c27934b40f42045f1b752121a35ccb3e0cd271251627eb3c520 Jan 27 18:12:30 crc kubenswrapper[4907]: I0127 18:12:30.145490 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf"] Jan 27 18:12:30 crc kubenswrapper[4907]: W0127 18:12:30.150758 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddccc085e_3aae_4c8e_8737_699c60063730.slice/crio-7568028391bf61bb02a1dfda89653a61268389eac45d6366a80c6ed5488d8171 WatchSource:0}: Error finding container 7568028391bf61bb02a1dfda89653a61268389eac45d6366a80c6ed5488d8171: Status 404 returned error can't find the container with id 7568028391bf61bb02a1dfda89653a61268389eac45d6366a80c6ed5488d8171 Jan 27 18:12:30 crc kubenswrapper[4907]: I0127 18:12:30.716681 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" event={"ID":"21da9305-e6ab-4378-b316-7a3ffc47faa0","Type":"ContainerStarted","Data":"934cf5e1d8502686f370ff0045b6810381dc76e84b21ec8bf75fd4935698431a"} Jan 27 18:12:30 crc kubenswrapper[4907]: I0127 18:12:30.716779 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" event={"ID":"21da9305-e6ab-4378-b316-7a3ffc47faa0","Type":"ContainerStarted","Data":"c5c7db16e5367c27934b40f42045f1b752121a35ccb3e0cd271251627eb3c520"} Jan 27 18:12:30 crc kubenswrapper[4907]: I0127 18:12:30.719040 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:30 crc kubenswrapper[4907]: I0127 18:12:30.719235 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" event={"ID":"dccc085e-3aae-4c8e-8737-699c60063730","Type":"ContainerStarted","Data":"7568028391bf61bb02a1dfda89653a61268389eac45d6366a80c6ed5488d8171"} Jan 27 18:12:30 crc kubenswrapper[4907]: I0127 18:12:30.753464 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" podStartSLOduration=1.7534291419999999 podStartE2EDuration="1.753429142s" podCreationTimestamp="2026-01-27 18:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:12:30.749466355 +0000 UTC m=+405.878749037" watchObservedRunningTime="2026-01-27 18:12:30.753429142 +0000 UTC m=+405.882711794" Jan 27 18:12:32 crc kubenswrapper[4907]: I0127 18:12:32.731777 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" event={"ID":"dccc085e-3aae-4c8e-8737-699c60063730","Type":"ContainerStarted","Data":"2297d43ccd2b3db8afba28eb41fd1f0131b6efd680916504deeef3e0cb335554"} Jan 27 18:12:32 crc kubenswrapper[4907]: I0127 18:12:32.732258 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" Jan 27 18:12:32 crc kubenswrapper[4907]: I0127 18:12:32.745656 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" Jan 27 18:12:32 crc kubenswrapper[4907]: I0127 18:12:32.750866 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" podStartSLOduration=2.265469189 podStartE2EDuration="3.750803902s" podCreationTimestamp="2026-01-27 18:12:29 +0000 UTC" firstStartedPulling="2026-01-27 18:12:30.154840998 +0000 UTC m=+405.284123610" lastFinishedPulling="2026-01-27 18:12:31.640175711 +0000 UTC m=+406.769458323" observedRunningTime="2026-01-27 18:12:32.750292437 +0000 UTC m=+407.879575089" watchObservedRunningTime="2026-01-27 18:12:32.750803902 +0000 UTC m=+407.880086544" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.456592 4907 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-256m4"] Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.457960 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.461199 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.461978 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.462158 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.462397 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-4v6tr" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.468945 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-256m4"] Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.611938 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ccd008a1-f6c9-49ea-8b56-893754445191-metrics-client-ca\") pod \"prometheus-operator-db54df47d-256m4\" (UID: \"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.612014 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccd008a1-f6c9-49ea-8b56-893754445191-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-256m4\" (UID: 
\"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.612216 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zszkb\" (UniqueName: \"kubernetes.io/projected/ccd008a1-f6c9-49ea-8b56-893754445191-kube-api-access-zszkb\") pod \"prometheus-operator-db54df47d-256m4\" (UID: \"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.612298 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ccd008a1-f6c9-49ea-8b56-893754445191-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-256m4\" (UID: \"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.713528 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccd008a1-f6c9-49ea-8b56-893754445191-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-256m4\" (UID: \"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.713648 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zszkb\" (UniqueName: \"kubernetes.io/projected/ccd008a1-f6c9-49ea-8b56-893754445191-kube-api-access-zszkb\") pod \"prometheus-operator-db54df47d-256m4\" (UID: \"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.713695 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ccd008a1-f6c9-49ea-8b56-893754445191-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-256m4\" (UID: \"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.713756 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ccd008a1-f6c9-49ea-8b56-893754445191-metrics-client-ca\") pod \"prometheus-operator-db54df47d-256m4\" (UID: \"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.714995 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ccd008a1-f6c9-49ea-8b56-893754445191-metrics-client-ca\") pod \"prometheus-operator-db54df47d-256m4\" (UID: \"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.721296 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ccd008a1-f6c9-49ea-8b56-893754445191-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-256m4\" (UID: \"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.721830 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccd008a1-f6c9-49ea-8b56-893754445191-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-256m4\" (UID: 
\"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.739297 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zszkb\" (UniqueName: \"kubernetes.io/projected/ccd008a1-f6c9-49ea-8b56-893754445191-kube-api-access-zszkb\") pod \"prometheus-operator-db54df47d-256m4\" (UID: \"ccd008a1-f6c9-49ea-8b56-893754445191\") " pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:33 crc kubenswrapper[4907]: I0127 18:12:33.786741 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" Jan 27 18:12:34 crc kubenswrapper[4907]: I0127 18:12:34.193597 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-256m4"] Jan 27 18:12:34 crc kubenswrapper[4907]: W0127 18:12:34.205138 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccd008a1_f6c9_49ea_8b56_893754445191.slice/crio-c51e27a127f00bcd669ab9b3a57994fa1aff4cde33f0be19d206aff73fdae1ea WatchSource:0}: Error finding container c51e27a127f00bcd669ab9b3a57994fa1aff4cde33f0be19d206aff73fdae1ea: Status 404 returned error can't find the container with id c51e27a127f00bcd669ab9b3a57994fa1aff4cde33f0be19d206aff73fdae1ea Jan 27 18:12:34 crc kubenswrapper[4907]: I0127 18:12:34.746657 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" event={"ID":"ccd008a1-f6c9-49ea-8b56-893754445191","Type":"ContainerStarted","Data":"c51e27a127f00bcd669ab9b3a57994fa1aff4cde33f0be19d206aff73fdae1ea"} Jan 27 18:12:36 crc kubenswrapper[4907]: I0127 18:12:36.777388 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" 
event={"ID":"ccd008a1-f6c9-49ea-8b56-893754445191","Type":"ContainerStarted","Data":"08a72d1dd6e8154c2406e8b35441938326365f0a0ffa8e3d46afaa2499c200fa"} Jan 27 18:12:36 crc kubenswrapper[4907]: I0127 18:12:36.778429 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" event={"ID":"ccd008a1-f6c9-49ea-8b56-893754445191","Type":"ContainerStarted","Data":"81fc3fc75dbf697385c9108d4812174a5704a4363b5a2fc6add7df7a4a141e0a"} Jan 27 18:12:36 crc kubenswrapper[4907]: I0127 18:12:36.804165 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-256m4" podStartSLOduration=2.31659018 podStartE2EDuration="3.804136728s" podCreationTimestamp="2026-01-27 18:12:33 +0000 UTC" firstStartedPulling="2026-01-27 18:12:34.207386432 +0000 UTC m=+409.336669054" lastFinishedPulling="2026-01-27 18:12:35.69493299 +0000 UTC m=+410.824215602" observedRunningTime="2026-01-27 18:12:36.800396689 +0000 UTC m=+411.929679341" watchObservedRunningTime="2026-01-27 18:12:36.804136728 +0000 UTC m=+411.933419380" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.791980 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7"] Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.794157 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.840420 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.840593 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.841399 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-q6vpk" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.845115 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7"] Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.880713 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr"] Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.892083 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.893669 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr"] Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.894376 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k98s9\" (UniqueName: \"kubernetes.io/projected/3227578e-bf46-482d-bc81-33cf9f5e45e9-kube-api-access-k98s9\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.894547 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.894749 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.894892 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3227578e-bf46-482d-bc81-33cf9f5e45e9-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.894956 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.895108 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" 
(UniqueName: \"kubernetes.io/secret/3227578e-bf46-482d-bc81-33cf9f5e45e9-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.895161 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3227578e-bf46-482d-bc81-33cf9f5e45e9-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.895786 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-2pnkk" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.904945 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-nln57"] Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.908748 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.913691 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-9qfxp" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.914353 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.914418 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.996985 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2b64919b-b6e7-4cc9-a40a-a22ac0022126-metrics-client-ca\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997174 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpc7x\" (UniqueName: \"kubernetes.io/projected/109f2f0b-779e-4070-842b-eb81187fb12a-kube-api-access-mpc7x\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997222 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wg4b\" (UniqueName: \"kubernetes.io/projected/2b64919b-b6e7-4cc9-a40a-a22ac0022126-kube-api-access-8wg4b\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997256 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-k98s9\" (UniqueName: \"kubernetes.io/projected/3227578e-bf46-482d-bc81-33cf9f5e45e9-kube-api-access-k98s9\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997305 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/2b64919b-b6e7-4cc9-a40a-a22ac0022126-root\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997428 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/109f2f0b-779e-4070-842b-eb81187fb12a-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997481 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2b64919b-b6e7-4cc9-a40a-a22ac0022126-sys\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997512 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/109f2f0b-779e-4070-842b-eb81187fb12a-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997674 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3227578e-bf46-482d-bc81-33cf9f5e45e9-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997694 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/109f2f0b-779e-4070-842b-eb81187fb12a-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997739 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-wtmp\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997762 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-textfile\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997833 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3227578e-bf46-482d-bc81-33cf9f5e45e9-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.997971 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3227578e-bf46-482d-bc81-33cf9f5e45e9-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.998037 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/109f2f0b-779e-4070-842b-eb81187fb12a-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.998065 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-tls\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.998096 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-nln57\" (UID: 
\"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.998214 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/109f2f0b-779e-4070-842b-eb81187fb12a-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:38 crc kubenswrapper[4907]: I0127 18:12:38.998830 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3227578e-bf46-482d-bc81-33cf9f5e45e9-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.003958 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3227578e-bf46-482d-bc81-33cf9f5e45e9-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.013319 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3227578e-bf46-482d-bc81-33cf9f5e45e9-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.028395 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-k98s9\" (UniqueName: \"kubernetes.io/projected/3227578e-bf46-482d-bc81-33cf9f5e45e9-kube-api-access-k98s9\") pod \"openshift-state-metrics-566fddb674-fk6x7\" (UID: \"3227578e-bf46-482d-bc81-33cf9f5e45e9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.099712 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpc7x\" (UniqueName: \"kubernetes.io/projected/109f2f0b-779e-4070-842b-eb81187fb12a-kube-api-access-mpc7x\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.099777 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wg4b\" (UniqueName: \"kubernetes.io/projected/2b64919b-b6e7-4cc9-a40a-a22ac0022126-kube-api-access-8wg4b\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.099807 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/2b64919b-b6e7-4cc9-a40a-a22ac0022126-root\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.099832 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/109f2f0b-779e-4070-842b-eb81187fb12a-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.099858 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2b64919b-b6e7-4cc9-a40a-a22ac0022126-sys\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.099885 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/109f2f0b-779e-4070-842b-eb81187fb12a-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.099908 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/109f2f0b-779e-4070-842b-eb81187fb12a-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.099933 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-wtmp\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.099956 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-textfile\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " 
pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.099982 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/109f2f0b-779e-4070-842b-eb81187fb12a-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.100003 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-tls\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.100022 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.100059 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/109f2f0b-779e-4070-842b-eb81187fb12a-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.100082 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2b64919b-b6e7-4cc9-a40a-a22ac0022126-metrics-client-ca\") pod \"node-exporter-nln57\" (UID: 
\"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.101244 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/2b64919b-b6e7-4cc9-a40a-a22ac0022126-root\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.101319 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2b64919b-b6e7-4cc9-a40a-a22ac0022126-sys\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.101391 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-wtmp\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: E0127 18:12:39.101393 4907 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Jan 27 18:12:39 crc kubenswrapper[4907]: E0127 18:12:39.101500 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-tls podName:2b64919b-b6e7-4cc9-a40a-a22ac0022126 nodeName:}" failed. No retries permitted until 2026-01-27 18:12:39.601468351 +0000 UTC m=+414.730751023 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-tls") pod "node-exporter-nln57" (UID: "2b64919b-b6e7-4cc9-a40a-a22ac0022126") : secret "node-exporter-tls" not found Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.101773 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/109f2f0b-779e-4070-842b-eb81187fb12a-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.102004 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-textfile\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.102208 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/109f2f0b-779e-4070-842b-eb81187fb12a-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.102394 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2b64919b-b6e7-4cc9-a40a-a22ac0022126-metrics-client-ca\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 
18:12:39.103128 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/109f2f0b-779e-4070-842b-eb81187fb12a-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.104355 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/109f2f0b-779e-4070-842b-eb81187fb12a-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.104900 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/109f2f0b-779e-4070-842b-eb81187fb12a-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.112108 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.120891 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wg4b\" (UniqueName: \"kubernetes.io/projected/2b64919b-b6e7-4cc9-a40a-a22ac0022126-kube-api-access-8wg4b\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " 
pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.128260 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpc7x\" (UniqueName: \"kubernetes.io/projected/109f2f0b-779e-4070-842b-eb81187fb12a-kube-api-access-mpc7x\") pod \"kube-state-metrics-777cb5bd5d-p8dvr\" (UID: \"109f2f0b-779e-4070-842b-eb81187fb12a\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.156685 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.219600 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.569620 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7"] Jan 27 18:12:39 crc kubenswrapper[4907]: W0127 18:12:39.575267 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3227578e_bf46_482d_bc81_33cf9f5e45e9.slice/crio-eb5bc4b2e3dd2213fabbfc46f2c0a9282b703ce2f8f287da95dbe387973a686a WatchSource:0}: Error finding container eb5bc4b2e3dd2213fabbfc46f2c0a9282b703ce2f8f287da95dbe387973a686a: Status 404 returned error can't find the container with id eb5bc4b2e3dd2213fabbfc46f2c0a9282b703ce2f8f287da95dbe387973a686a Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.605651 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-tls\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 
18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.610375 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/2b64919b-b6e7-4cc9-a40a-a22ac0022126-node-exporter-tls\") pod \"node-exporter-nln57\" (UID: \"2b64919b-b6e7-4cc9-a40a-a22ac0022126\") " pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.663327 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr"] Jan 27 18:12:39 crc kubenswrapper[4907]: W0127 18:12:39.667981 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod109f2f0b_779e_4070_842b_eb81187fb12a.slice/crio-94bcf433f82a43f377bee8c2e3c3a727966bd07cf5f61a9ad6614e20523956aa WatchSource:0}: Error finding container 94bcf433f82a43f377bee8c2e3c3a727966bd07cf5f61a9ad6614e20523956aa: Status 404 returned error can't find the container with id 94bcf433f82a43f377bee8c2e3c3a727966bd07cf5f61a9ad6614e20523956aa Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.803943 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" event={"ID":"109f2f0b-779e-4070-842b-eb81187fb12a","Type":"ContainerStarted","Data":"94bcf433f82a43f377bee8c2e3c3a727966bd07cf5f61a9ad6614e20523956aa"} Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.805689 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" event={"ID":"3227578e-bf46-482d-bc81-33cf9f5e45e9","Type":"ContainerStarted","Data":"016d393dc785c09f48b7b5f0119a98cb6f2a5b8ce44ba848cab9804651fee169"} Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.805713 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" 
event={"ID":"3227578e-bf46-482d-bc81-33cf9f5e45e9","Type":"ContainerStarted","Data":"eb5bc4b2e3dd2213fabbfc46f2c0a9282b703ce2f8f287da95dbe387973a686a"} Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.829747 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-nln57" Jan 27 18:12:39 crc kubenswrapper[4907]: W0127 18:12:39.862670 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b64919b_b6e7_4cc9_a40a_a22ac0022126.slice/crio-9063fe44ec4a6943af7057ab92d8ce423ba009b7542cfef1275c3dc89760151e WatchSource:0}: Error finding container 9063fe44ec4a6943af7057ab92d8ce423ba009b7542cfef1275c3dc89760151e: Status 404 returned error can't find the container with id 9063fe44ec4a6943af7057ab92d8ce423ba009b7542cfef1275c3dc89760151e Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.925215 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.927397 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.929957 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.930480 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.931266 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.931445 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.931828 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.932432 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-dxz4f" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.932592 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.932739 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.939062 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Jan 27 18:12:39 crc kubenswrapper[4907]: I0127 18:12:39.958199 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014140 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014185 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-config-volume\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014225 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014244 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014268 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: 
\"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014286 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014303 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-web-config\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014323 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-config-out\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014347 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014365 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014382 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6mqj\" (UniqueName: \"kubernetes.io/projected/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-kube-api-access-c6mqj\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.014404 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115465 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115515 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115536 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" 
(UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-web-config\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115576 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-config-out\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115611 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115637 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115670 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6mqj\" (UniqueName: \"kubernetes.io/projected/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-kube-api-access-c6mqj\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115707 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115749 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115768 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-config-volume\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115800 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.115822 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.116082 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" 
(UniqueName: \"kubernetes.io/empty-dir/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.116879 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.117737 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.121339 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-config-out\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.121958 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.122311 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: 
\"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.125738 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-config-volume\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.130161 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.137065 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.137243 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-web-config\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.150366 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.156126 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6mqj\" (UniqueName: \"kubernetes.io/projected/3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2-kube-api-access-c6mqj\") pod \"alertmanager-main-0\" (UID: \"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.252176 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.704635 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 27 18:12:40 crc kubenswrapper[4907]: W0127 18:12:40.715879 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3797d1fb_c1cc_4ee9_8a90_7f26906ce9b2.slice/crio-8d2d5a6d1242c036db8a4687c8cd732705c62916f2f9005cd1d3c30cf7189793 WatchSource:0}: Error finding container 8d2d5a6d1242c036db8a4687c8cd732705c62916f2f9005cd1d3c30cf7189793: Status 404 returned error can't find the container with id 8d2d5a6d1242c036db8a4687c8cd732705c62916f2f9005cd1d3c30cf7189793 Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.815884 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" event={"ID":"3227578e-bf46-482d-bc81-33cf9f5e45e9","Type":"ContainerStarted","Data":"9473c7e6ca38a2d926fa16dcc9b660e365e87cfa1fbd8d7b413352d1f5b8f080"} Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.817543 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/node-exporter-nln57" event={"ID":"2b64919b-b6e7-4cc9-a40a-a22ac0022126","Type":"ContainerStarted","Data":"9063fe44ec4a6943af7057ab92d8ce423ba009b7542cfef1275c3dc89760151e"} Jan 27 18:12:40 crc kubenswrapper[4907]: I0127 18:12:40.818806 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2","Type":"ContainerStarted","Data":"8d2d5a6d1242c036db8a4687c8cd732705c62916f2f9005cd1d3c30cf7189793"} Jan 27 18:12:41 crc kubenswrapper[4907]: I0127 18:12:41.899819 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9"] Jan 27 18:12:41 crc kubenswrapper[4907]: I0127 18:12:41.902162 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:41 crc kubenswrapper[4907]: I0127 18:12:41.907314 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-4r7ll" Jan 27 18:12:41 crc kubenswrapper[4907]: I0127 18:12:41.907551 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Jan 27 18:12:41 crc kubenswrapper[4907]: I0127 18:12:41.907825 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-bv9306prcgika" Jan 27 18:12:41 crc kubenswrapper[4907]: I0127 18:12:41.907960 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Jan 27 18:12:41 crc kubenswrapper[4907]: I0127 18:12:41.908102 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Jan 27 18:12:41 crc kubenswrapper[4907]: I0127 18:12:41.908277 4907 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Jan 27 18:12:41 crc kubenswrapper[4907]: I0127 18:12:41.908405 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Jan 27 18:12:41 crc kubenswrapper[4907]: I0127 18:12:41.924318 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9"] Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.048977 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-metrics-client-ca\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.049439 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nr84\" (UniqueName: \"kubernetes.io/projected/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-kube-api-access-4nr84\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.049466 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.049513 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: 
\"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-grpc-tls\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.049538 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.049602 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.049647 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-tls\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.049690 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: 
\"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.150451 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-metrics-client-ca\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.150505 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nr84\" (UniqueName: \"kubernetes.io/projected/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-kube-api-access-4nr84\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.150524 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.150576 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-grpc-tls\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.150597 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.150629 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.150662 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-tls\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.150706 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.152707 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-metrics-client-ca\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 
crc kubenswrapper[4907]: I0127 18:12:42.170197 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.170209 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.170667 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.171964 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-grpc-tls\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.173093 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-tls\") 
pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.173312 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.173704 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nr84\" (UniqueName: \"kubernetes.io/projected/8e0f501d-4ce7-4268-b84c-71e7a8a1b430-kube-api-access-4nr84\") pod \"thanos-querier-c9f8b8df8-2gbm9\" (UID: \"8e0f501d-4ce7-4268-b84c-71e7a8a1b430\") " pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.311929 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.718134 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9"] Jan 27 18:12:42 crc kubenswrapper[4907]: W0127 18:12:42.777164 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e0f501d_4ce7_4268_b84c_71e7a8a1b430.slice/crio-6a211d5b548f14b99bc4bd6d38fe2a2793fb1a6ea395e6f31735dc585e40064b WatchSource:0}: Error finding container 6a211d5b548f14b99bc4bd6d38fe2a2793fb1a6ea395e6f31735dc585e40064b: Status 404 returned error can't find the container with id 6a211d5b548f14b99bc4bd6d38fe2a2793fb1a6ea395e6f31735dc585e40064b Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.834494 4907 generic.go:334] "Generic (PLEG): container finished" podID="2b64919b-b6e7-4cc9-a40a-a22ac0022126" containerID="c8aa0ff9df88cc288ddd843ef0af1dfb14ce08b2f8404a7f5ef883b301c43d13" exitCode=0 Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.834581 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-nln57" event={"ID":"2b64919b-b6e7-4cc9-a40a-a22ac0022126","Type":"ContainerDied","Data":"c8aa0ff9df88cc288ddd843ef0af1dfb14ce08b2f8404a7f5ef883b301c43d13"} Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.836714 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" event={"ID":"8e0f501d-4ce7-4268-b84c-71e7a8a1b430","Type":"ContainerStarted","Data":"6a211d5b548f14b99bc4bd6d38fe2a2793fb1a6ea395e6f31735dc585e40064b"} Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.839908 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" 
event={"ID":"109f2f0b-779e-4070-842b-eb81187fb12a","Type":"ContainerStarted","Data":"0e23e36f48a50c2f1d580245a787292935d13f4bb4d433040a6221a645e2a7e0"} Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.839975 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" event={"ID":"109f2f0b-779e-4070-842b-eb81187fb12a","Type":"ContainerStarted","Data":"55997bbef9b29ca17253903a775af415c0e3c86ade9a43d98530733be6b1f4d4"} Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.839989 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" event={"ID":"109f2f0b-779e-4070-842b-eb81187fb12a","Type":"ContainerStarted","Data":"883f52da2c95e11b3b1cab303073666f928e30575ea0fb2adbb5306f48bb64f3"} Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.841966 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" event={"ID":"3227578e-bf46-482d-bc81-33cf9f5e45e9","Type":"ContainerStarted","Data":"95c105599a07d32955429425e0d2619c332b14f23ed4aaa8a44830a4ae31b4be"} Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.879666 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p8dvr" podStartSLOduration=2.733910545 podStartE2EDuration="4.879632803s" podCreationTimestamp="2026-01-27 18:12:38 +0000 UTC" firstStartedPulling="2026-01-27 18:12:39.671006283 +0000 UTC m=+414.800288895" lastFinishedPulling="2026-01-27 18:12:41.816728541 +0000 UTC m=+416.946011153" observedRunningTime="2026-01-27 18:12:42.874582575 +0000 UTC m=+418.003865207" watchObservedRunningTime="2026-01-27 18:12:42.879632803 +0000 UTC m=+418.008915415" Jan 27 18:12:42 crc kubenswrapper[4907]: I0127 18:12:42.896805 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-fk6x7" 
podStartSLOduration=2.95106769 podStartE2EDuration="4.896783606s" podCreationTimestamp="2026-01-27 18:12:38 +0000 UTC" firstStartedPulling="2026-01-27 18:12:39.883603574 +0000 UTC m=+415.012886186" lastFinishedPulling="2026-01-27 18:12:41.82931949 +0000 UTC m=+416.958602102" observedRunningTime="2026-01-27 18:12:42.889660597 +0000 UTC m=+418.018943209" watchObservedRunningTime="2026-01-27 18:12:42.896783606 +0000 UTC m=+418.026066218" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.619422 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7cc8bd7b4-59b72"] Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.620816 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.633173 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cc8bd7b4-59b72"] Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.675898 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-config\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.675956 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-service-ca\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.675976 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz5wq\" (UniqueName: 
\"kubernetes.io/projected/bbb41873-fa83-4786-b31d-d0d3ebeb902b-kube-api-access-hz5wq\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.676006 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-serving-cert\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.676227 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-oauth-config\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.676279 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-trusted-ca-bundle\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.676376 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-oauth-serving-cert\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.777907 4907 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-config\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.777974 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-service-ca\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.778003 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz5wq\" (UniqueName: \"kubernetes.io/projected/bbb41873-fa83-4786-b31d-d0d3ebeb902b-kube-api-access-hz5wq\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.778039 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-serving-cert\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.778109 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-oauth-config\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.778130 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-trusted-ca-bundle\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.778151 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-oauth-serving-cert\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.779032 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-service-ca\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.779346 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-trusted-ca-bundle\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.779721 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-config\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.779878 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-oauth-serving-cert\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.791212 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-serving-cert\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.791268 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-oauth-config\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.801976 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz5wq\" (UniqueName: \"kubernetes.io/projected/bbb41873-fa83-4786-b31d-d0d3ebeb902b-kube-api-access-hz5wq\") pod \"console-7cc8bd7b4-59b72\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.852096 4907 generic.go:334] "Generic (PLEG): container finished" podID="3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2" containerID="097827d617ad4ae0c94e4ccbd283e7db0f702edbe88460a91fcdaf07777118bb" exitCode=0 Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.852181 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2","Type":"ContainerDied","Data":"097827d617ad4ae0c94e4ccbd283e7db0f702edbe88460a91fcdaf07777118bb"} Jan 27 18:12:43 crc kubenswrapper[4907]: 
I0127 18:12:43.863106 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-nln57" event={"ID":"2b64919b-b6e7-4cc9-a40a-a22ac0022126","Type":"ContainerStarted","Data":"b3301bc3c3088b9f7bd4f5c54730f86c7ee33e1f528931fddeedfafb364a3976"} Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.863154 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-nln57" event={"ID":"2b64919b-b6e7-4cc9-a40a-a22ac0022126","Type":"ContainerStarted","Data":"695bf4485cfe4386e544433043e145ed5c5f9b6e9e45819065bfbd668646e2c6"} Jan 27 18:12:43 crc kubenswrapper[4907]: I0127 18:12:43.940752 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.026209 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-nln57" podStartSLOduration=4.0647906 podStartE2EDuration="6.026182296s" podCreationTimestamp="2026-01-27 18:12:38 +0000 UTC" firstStartedPulling="2026-01-27 18:12:39.864775172 +0000 UTC m=+414.994057784" lastFinishedPulling="2026-01-27 18:12:41.826166838 +0000 UTC m=+416.955449480" observedRunningTime="2026-01-27 18:12:43.908156497 +0000 UTC m=+419.037439109" watchObservedRunningTime="2026-01-27 18:12:44.026182296 +0000 UTC m=+419.155464908" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.027898 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-7f448b7857-l4vhw"] Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.028838 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.030840 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.031044 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-wdtxr" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.031256 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.032495 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-bo2gg3s7etg0k" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.032851 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.032931 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.037264 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7f448b7857-l4vhw"] Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.089632 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-audit-log\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.090111 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.090141 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4drc\" (UniqueName: \"kubernetes.io/projected/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-kube-api-access-j4drc\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.090217 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-secret-metrics-client-certs\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.090270 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-secret-metrics-server-tls\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.090298 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-metrics-server-audit-profiles\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " 
pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.090424 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-client-ca-bundle\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.191446 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-client-ca-bundle\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.191522 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-audit-log\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.191569 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.191593 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4drc\" (UniqueName: 
\"kubernetes.io/projected/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-kube-api-access-j4drc\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.191624 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-secret-metrics-client-certs\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.191656 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-secret-metrics-server-tls\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.191677 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-metrics-server-audit-profiles\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.192871 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc 
kubenswrapper[4907]: I0127 18:12:44.193204 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-audit-log\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.194396 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-metrics-server-audit-profiles\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.197665 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-secret-metrics-client-certs\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.198340 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-secret-metrics-server-tls\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.201724 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-client-ca-bundle\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " 
pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.206984 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4drc\" (UniqueName: \"kubernetes.io/projected/562a795f-c556-42b2-a9a3-0baf8b3ce4c5-kube-api-access-j4drc\") pod \"metrics-server-7f448b7857-l4vhw\" (UID: \"562a795f-c556-42b2-a9a3-0baf8b3ce4c5\") " pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.356480 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.363929 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cc8bd7b4-59b72"] Jan 27 18:12:44 crc kubenswrapper[4907]: W0127 18:12:44.376133 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbb41873_fa83_4786_b31d_d0d3ebeb902b.slice/crio-b4aad96ead503466094394241ee313a5b94bd8b4c7ae6d6e87931f328c359954 WatchSource:0}: Error finding container b4aad96ead503466094394241ee313a5b94bd8b4c7ae6d6e87931f328c359954: Status 404 returned error can't find the container with id b4aad96ead503466094394241ee313a5b94bd8b4c7ae6d6e87931f328c359954 Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.607044 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-6596df577b-flw67"] Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.608204 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.615319 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.615568 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.634720 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6596df577b-flw67"] Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.724388 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c3e1c70a-dd32-4bc6-b7ec-6ec039441440-monitoring-plugin-cert\") pod \"monitoring-plugin-6596df577b-flw67\" (UID: \"c3e1c70a-dd32-4bc6-b7ec-6ec039441440\") " pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.826342 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c3e1c70a-dd32-4bc6-b7ec-6ec039441440-monitoring-plugin-cert\") pod \"monitoring-plugin-6596df577b-flw67\" (UID: \"c3e1c70a-dd32-4bc6-b7ec-6ec039441440\") " pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.845966 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c3e1c70a-dd32-4bc6-b7ec-6ec039441440-monitoring-plugin-cert\") pod \"monitoring-plugin-6596df577b-flw67\" (UID: \"c3e1c70a-dd32-4bc6-b7ec-6ec039441440\") " pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.873848 4907 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7f448b7857-l4vhw"] Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.874324 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cc8bd7b4-59b72" event={"ID":"bbb41873-fa83-4786-b31d-d0d3ebeb902b","Type":"ContainerStarted","Data":"b4aad96ead503466094394241ee313a5b94bd8b4c7ae6d6e87931f328c359954"} Jan 27 18:12:44 crc kubenswrapper[4907]: I0127 18:12:44.935447 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.109609 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.111513 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.115886 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.115953 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.115892 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-fj9lk" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.116045 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.116149 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.116601 4907 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.116827 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.118847 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.119097 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-38cbjc522925s" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.119231 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.119392 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.122913 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.125881 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.140418 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.236292 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: 
I0127 18:12:45.236375 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-config\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.236407 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/adac6b31-6901-4af8-bc21-648d56318021-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.236452 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.236676 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-web-config\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.236793 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc 
kubenswrapper[4907]: I0127 18:12:45.236840 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/adac6b31-6901-4af8-bc21-648d56318021-config-out\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.236858 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/adac6b31-6901-4af8-bc21-648d56318021-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.236910 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.236978 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.237002 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " 
pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.237031 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.237050 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.237072 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.237092 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.237150 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkhtl\" (UniqueName: 
\"kubernetes.io/projected/adac6b31-6901-4af8-bc21-648d56318021-kube-api-access-nkhtl\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.237182 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.237200 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.338940 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkhtl\" (UniqueName: \"kubernetes.io/projected/adac6b31-6901-4af8-bc21-648d56318021-kube-api-access-nkhtl\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.338997 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.339013 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.339734 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.339785 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-config\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.339800 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/adac6b31-6901-4af8-bc21-648d56318021-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.339829 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.339883 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-web-config\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.339924 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.339948 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/adac6b31-6901-4af8-bc21-648d56318021-config-out\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.339951 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.339962 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/adac6b31-6901-4af8-bc21-648d56318021-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.340021 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: 
\"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.340064 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.340095 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.340122 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.340145 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.340172 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" 
(UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.340198 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.340300 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/adac6b31-6901-4af8-bc21-648d56318021-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.340941 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.341000 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.342829 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: 
\"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.343620 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/adac6b31-6901-4af8-bc21-648d56318021-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.343759 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.343875 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.344653 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-web-config\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.344814 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.345837 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.346190 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/adac6b31-6901-4af8-bc21-648d56318021-config-out\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.348319 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-config\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.348775 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.351598 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: 
\"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.352979 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/adac6b31-6901-4af8-bc21-648d56318021-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.354219 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/adac6b31-6901-4af8-bc21-648d56318021-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.363888 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkhtl\" (UniqueName: \"kubernetes.io/projected/adac6b31-6901-4af8-bc21-648d56318021-kube-api-access-nkhtl\") pod \"prometheus-k8s-0\" (UID: \"adac6b31-6901-4af8-bc21-648d56318021\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.431935 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 18:12:45 crc kubenswrapper[4907]: W0127 18:12:45.581176 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod562a795f_c556_42b2_a9a3_0baf8b3ce4c5.slice/crio-bf4d47879e01ef398b326b809cf3bbbb2609b21b0286e1ee5137d3c5a929bea7 WatchSource:0}: Error finding container bf4d47879e01ef398b326b809cf3bbbb2609b21b0286e1ee5137d3c5a929bea7: Status 404 returned error can't find the container with id bf4d47879e01ef398b326b809cf3bbbb2609b21b0286e1ee5137d3c5a929bea7 Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.879550 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cc8bd7b4-59b72" event={"ID":"bbb41873-fa83-4786-b31d-d0d3ebeb902b","Type":"ContainerStarted","Data":"9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2"} Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.881836 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" event={"ID":"562a795f-c556-42b2-a9a3-0baf8b3ce4c5","Type":"ContainerStarted","Data":"bf4d47879e01ef398b326b809cf3bbbb2609b21b0286e1ee5137d3c5a929bea7"} Jan 27 18:12:45 crc kubenswrapper[4907]: I0127 18:12:45.911356 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7cc8bd7b4-59b72" podStartSLOduration=2.911338498 podStartE2EDuration="2.911338498s" podCreationTimestamp="2026-01-27 18:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:12:45.902910811 +0000 UTC m=+421.032193453" watchObservedRunningTime="2026-01-27 18:12:45.911338498 +0000 UTC m=+421.040621110" Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:46.631726 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/monitoring-plugin-6596df577b-flw67"] Jan 27 18:12:47 crc kubenswrapper[4907]: W0127 18:12:46.642388 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3e1c70a_dd32_4bc6_b7ec_6ec039441440.slice/crio-e9f35f6712fb1432fe6dc0baec46b524148333df0f7b5e88564904346ea53616 WatchSource:0}: Error finding container e9f35f6712fb1432fe6dc0baec46b524148333df0f7b5e88564904346ea53616: Status 404 returned error can't find the container with id e9f35f6712fb1432fe6dc0baec46b524148333df0f7b5e88564904346ea53616 Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:46.700599 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 27 18:12:47 crc kubenswrapper[4907]: W0127 18:12:46.708634 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadac6b31_6901_4af8_bc21_648d56318021.slice/crio-63d8c536ff58b61feea587dcb5fac13bcc346aade27eab9961906a75c6e39d9a WatchSource:0}: Error finding container 63d8c536ff58b61feea587dcb5fac13bcc346aade27eab9961906a75c6e39d9a: Status 404 returned error can't find the container with id 63d8c536ff58b61feea587dcb5fac13bcc346aade27eab9961906a75c6e39d9a Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:46.888959 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"adac6b31-6901-4af8-bc21-648d56318021","Type":"ContainerStarted","Data":"63d8c536ff58b61feea587dcb5fac13bcc346aade27eab9961906a75c6e39d9a"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:46.891228 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" event={"ID":"8e0f501d-4ce7-4268-b84c-71e7a8a1b430","Type":"ContainerStarted","Data":"86d8aca05cc94e034fdff91a725c05e14cdeb6fa45a159e588121c06df631aa4"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 
18:12:46.891279 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" event={"ID":"8e0f501d-4ce7-4268-b84c-71e7a8a1b430","Type":"ContainerStarted","Data":"e57238a4dc0b2bf79560cd3ffc972cbfa338354c5cec19a0c81fcbb5500021a2"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:46.891292 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" event={"ID":"8e0f501d-4ce7-4268-b84c-71e7a8a1b430","Type":"ContainerStarted","Data":"5c81680fb82ab7c135ed6a153161c9b9474d5720de0fdea0a5722a44d0eb1c1d"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:46.893685 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2","Type":"ContainerStarted","Data":"e3ad205c23867f185c2e09b4e8c6533cfc4d08ccdd7ef6d843d4705904d1f7bc"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:46.893716 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2","Type":"ContainerStarted","Data":"b37e225d07feea62e890d4bbb3defe3271e136435ecbd7291aa1b68d717cfce2"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:46.893727 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2","Type":"ContainerStarted","Data":"580714ab4bc2ccd8a2e2c1ee20846bece04c1324e80977af2b1e74a967743427"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:46.895839 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" event={"ID":"c3e1c70a-dd32-4bc6-b7ec-6ec039441440","Type":"ContainerStarted","Data":"e9f35f6712fb1432fe6dc0baec46b524148333df0f7b5e88564904346ea53616"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:47.903938 4907 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" event={"ID":"562a795f-c556-42b2-a9a3-0baf8b3ce4c5","Type":"ContainerStarted","Data":"c13b56b4336fe54ce350cf735e6495e7b316df8aecab8e8659bd933cbe92b3a7"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:47.908883 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2","Type":"ContainerStarted","Data":"6c6648fc73dd44026854491676f0287794c631aaa5473d5f10b9dc2d38387ee5"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:47.908921 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2","Type":"ContainerStarted","Data":"bd555f91fe21e2df39af47867452d32cffd382d952c9ba6793be28d5c0880d7d"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:47.910322 4907 generic.go:334] "Generic (PLEG): container finished" podID="adac6b31-6901-4af8-bc21-648d56318021" containerID="02ccd677bcd49803979431c60b8f1e6b7bf742c20502fcaf097fbad7c4954043" exitCode=0 Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:47.910354 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"adac6b31-6901-4af8-bc21-648d56318021","Type":"ContainerDied","Data":"02ccd677bcd49803979431c60b8f1e6b7bf742c20502fcaf097fbad7c4954043"} Jan 27 18:12:47 crc kubenswrapper[4907]: I0127 18:12:47.929366 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" podStartSLOduration=2.000238054 podStartE2EDuration="3.929348133s" podCreationTimestamp="2026-01-27 18:12:44 +0000 UTC" firstStartedPulling="2026-01-27 18:12:45.584471348 +0000 UTC m=+420.713753960" lastFinishedPulling="2026-01-27 18:12:47.513581427 +0000 UTC m=+422.642864039" observedRunningTime="2026-01-27 18:12:47.925521931 +0000 UTC m=+423.054804533" 
watchObservedRunningTime="2026-01-27 18:12:47.929348133 +0000 UTC m=+423.058630745" Jan 27 18:12:48 crc kubenswrapper[4907]: I0127 18:12:48.923952 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" event={"ID":"8e0f501d-4ce7-4268-b84c-71e7a8a1b430","Type":"ContainerStarted","Data":"5c8da3d7b99c2462d65efd5ce25dba6c9a9704d349cc448416d200e5c82f8f70"} Jan 27 18:12:48 crc kubenswrapper[4907]: I0127 18:12:48.924022 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" event={"ID":"8e0f501d-4ce7-4268-b84c-71e7a8a1b430","Type":"ContainerStarted","Data":"d78684e2f24815421491d90d7518e0c87348b62d375d0c1b71f809f76bced033"} Jan 27 18:12:48 crc kubenswrapper[4907]: I0127 18:12:48.924042 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" event={"ID":"8e0f501d-4ce7-4268-b84c-71e7a8a1b430","Type":"ContainerStarted","Data":"19f9c8ab6f2eacc00cf5c489539bb71163bd6f8fcc4e369835d6f113d2e813fd"} Jan 27 18:12:48 crc kubenswrapper[4907]: I0127 18:12:48.925526 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:48 crc kubenswrapper[4907]: I0127 18:12:48.927422 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" event={"ID":"c3e1c70a-dd32-4bc6-b7ec-6ec039441440","Type":"ContainerStarted","Data":"aac7a9fe1993ca66ad15cfca52536cf84b72c871cd832f1e6ff443b5ba4b645e"} Jan 27 18:12:48 crc kubenswrapper[4907]: I0127 18:12:48.927917 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" Jan 27 18:12:48 crc kubenswrapper[4907]: I0127 18:12:48.932899 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"3797d1fb-c1cc-4ee9-8a90-7f26906ce9b2","Type":"ContainerStarted","Data":"f8f651c635638588e16f63d265adea2c5991e04bb1d17d182ece1adbfd43a08a"} Jan 27 18:12:48 crc kubenswrapper[4907]: I0127 18:12:48.936854 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" Jan 27 18:12:48 crc kubenswrapper[4907]: I0127 18:12:48.977415 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" podStartSLOduration=2.69903117 podStartE2EDuration="7.97738414s" podCreationTimestamp="2026-01-27 18:12:41 +0000 UTC" firstStartedPulling="2026-01-27 18:12:42.779087137 +0000 UTC m=+417.908369749" lastFinishedPulling="2026-01-27 18:12:48.057440097 +0000 UTC m=+423.186722719" observedRunningTime="2026-01-27 18:12:48.951234683 +0000 UTC m=+424.080517285" watchObservedRunningTime="2026-01-27 18:12:48.97738414 +0000 UTC m=+424.106666762" Jan 27 18:12:48 crc kubenswrapper[4907]: I0127 18:12:48.978361 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" podStartSLOduration=3.081030531 podStartE2EDuration="4.978355388s" podCreationTimestamp="2026-01-27 18:12:44 +0000 UTC" firstStartedPulling="2026-01-27 18:12:46.653386217 +0000 UTC m=+421.782668829" lastFinishedPulling="2026-01-27 18:12:48.550711074 +0000 UTC m=+423.679993686" observedRunningTime="2026-01-27 18:12:48.970391825 +0000 UTC m=+424.099674437" watchObservedRunningTime="2026-01-27 18:12:48.978355388 +0000 UTC m=+424.107638010" Jan 27 18:12:49 crc kubenswrapper[4907]: I0127 18:12:49.019846 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.68465797 podStartE2EDuration="10.019819443s" podCreationTimestamp="2026-01-27 18:12:39 +0000 UTC" firstStartedPulling="2026-01-27 18:12:40.719441511 +0000 UTC 
m=+415.848724123" lastFinishedPulling="2026-01-27 18:12:48.054602994 +0000 UTC m=+423.183885596" observedRunningTime="2026-01-27 18:12:49.002101814 +0000 UTC m=+424.131384436" watchObservedRunningTime="2026-01-27 18:12:49.019819443 +0000 UTC m=+424.149102055" Jan 27 18:12:49 crc kubenswrapper[4907]: I0127 18:12:49.564164 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" Jan 27 18:12:49 crc kubenswrapper[4907]: I0127 18:12:49.620009 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wwg9f"] Jan 27 18:12:51 crc kubenswrapper[4907]: I0127 18:12:51.957951 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"adac6b31-6901-4af8-bc21-648d56318021","Type":"ContainerStarted","Data":"bab7ad455d7bcf8b8025daa37f484260be75fe10219e39d69fc8ef2d0dbd2fce"} Jan 27 18:12:51 crc kubenswrapper[4907]: I0127 18:12:51.958721 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"adac6b31-6901-4af8-bc21-648d56318021","Type":"ContainerStarted","Data":"35244a64ccb45d6463063a0944dd1016d6de399355b4f19dea17b96a6ad3cce6"} Jan 27 18:12:52 crc kubenswrapper[4907]: I0127 18:12:52.506310 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" Jan 27 18:12:52 crc kubenswrapper[4907]: I0127 18:12:52.969535 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"adac6b31-6901-4af8-bc21-648d56318021","Type":"ContainerStarted","Data":"4c07ce8aa6ee6b5b4fdaf35a5bdd25f70ebdb0d3860364128e7441387a136da3"} Jan 27 18:12:52 crc kubenswrapper[4907]: I0127 18:12:52.969606 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"adac6b31-6901-4af8-bc21-648d56318021","Type":"ContainerStarted","Data":"a697b53b00cc7778188e6ab6a600c50f8590251dc7c18cca7a4e3664161240d3"} Jan 27 18:12:52 crc kubenswrapper[4907]: I0127 18:12:52.969677 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"adac6b31-6901-4af8-bc21-648d56318021","Type":"ContainerStarted","Data":"a7fdd7a2b3898b5812ae7617f00929a7e55cf645a4b616af41291eb13219c945"} Jan 27 18:12:53 crc kubenswrapper[4907]: I0127 18:12:53.942239 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:53 crc kubenswrapper[4907]: I0127 18:12:53.942291 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:53 crc kubenswrapper[4907]: I0127 18:12:53.947365 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:53 crc kubenswrapper[4907]: I0127 18:12:53.981469 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"adac6b31-6901-4af8-bc21-648d56318021","Type":"ContainerStarted","Data":"6551ae61669b66c95152146db91aefaba49601583377f5e60a60e80a5da520e3"} Jan 27 18:12:53 crc kubenswrapper[4907]: I0127 18:12:53.984852 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:12:54 crc kubenswrapper[4907]: I0127 18:12:54.019383 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=5.371568521 podStartE2EDuration="9.019211939s" podCreationTimestamp="2026-01-27 18:12:45 +0000 UTC" firstStartedPulling="2026-01-27 18:12:47.913357524 +0000 UTC m=+423.042640136" lastFinishedPulling="2026-01-27 18:12:51.561000932 +0000 UTC m=+426.690283554" 
observedRunningTime="2026-01-27 18:12:54.014850571 +0000 UTC m=+429.144133203" watchObservedRunningTime="2026-01-27 18:12:54.019211939 +0000 UTC m=+429.148494561"
Jan 27 18:12:54 crc kubenswrapper[4907]: I0127 18:12:54.078129 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-grwdr"]
Jan 27 18:12:55 crc kubenswrapper[4907]: I0127 18:12:55.432881 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 18:12:56 crc kubenswrapper[4907]: I0127 18:12:56.522158 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 18:12:56 crc kubenswrapper[4907]: I0127 18:12:56.522279 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 18:12:56 crc kubenswrapper[4907]: I0127 18:12:56.522369 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh"
Jan 27 18:12:56 crc kubenswrapper[4907]: I0127 18:12:56.523458 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"42d20c6a3c7e78cf4dce8449106267d123618c4b64f512fb555d0ba2befbdb39"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 18:12:56 crc kubenswrapper[4907]: I0127 18:12:56.523643 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://42d20c6a3c7e78cf4dce8449106267d123618c4b64f512fb555d0ba2befbdb39" gracePeriod=600
Jan 27 18:12:57 crc kubenswrapper[4907]: I0127 18:12:57.007149 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="42d20c6a3c7e78cf4dce8449106267d123618c4b64f512fb555d0ba2befbdb39" exitCode=0
Jan 27 18:12:57 crc kubenswrapper[4907]: I0127 18:12:57.007196 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"42d20c6a3c7e78cf4dce8449106267d123618c4b64f512fb555d0ba2befbdb39"}
Jan 27 18:12:57 crc kubenswrapper[4907]: I0127 18:12:57.007488 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"ab92d09fe428c3a9b4babe53db3cc7cade210df12baba2fd0ad23f40dd462ded"}
Jan 27 18:12:57 crc kubenswrapper[4907]: I0127 18:12:57.007516 4907 scope.go:117] "RemoveContainer" containerID="f41a1b196bd48fce2b5bf24e525fc5c905e44530f25a37f92ca797c66d0b778e"
Jan 27 18:13:04 crc kubenswrapper[4907]: I0127 18:13:04.358051 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw"
Jan 27 18:13:04 crc kubenswrapper[4907]: I0127 18:13:04.358840 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw"
Jan 27 18:13:14 crc kubenswrapper[4907]: I0127 18:13:14.665313 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" podUID="c85caecd-2eec-479e-82a3-2ac3c53c79c6" containerName="registry" containerID="cri-o://7c3df456d26b3f55c0c3f0e8e6da999cbc7ad2995bbe95328324c900796cdcc4" gracePeriod=30
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.172743 4907 generic.go:334] "Generic (PLEG): container finished" podID="c85caecd-2eec-479e-82a3-2ac3c53c79c6" containerID="7c3df456d26b3f55c0c3f0e8e6da999cbc7ad2995bbe95328324c900796cdcc4" exitCode=0
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.172907 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" event={"ID":"c85caecd-2eec-479e-82a3-2ac3c53c79c6","Type":"ContainerDied","Data":"7c3df456d26b3f55c0c3f0e8e6da999cbc7ad2995bbe95328324c900796cdcc4"}
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.173216 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f" event={"ID":"c85caecd-2eec-479e-82a3-2ac3c53c79c6","Type":"ContainerDied","Data":"c5aa828cd072604ed1f906a58b65bc98f6dfd27675da5071e1386f563dc177a1"}
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.173242 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5aa828cd072604ed1f906a58b65bc98f6dfd27675da5071e1386f563dc177a1"
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.193447 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.213733 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") "
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.213815 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-tls\") pod \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") "
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.213849 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9t88\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-kube-api-access-b9t88\") pod \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") "
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.213890 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-trusted-ca\") pod \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") "
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.213926 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-certificates\") pod \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") "
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.213955 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c85caecd-2eec-479e-82a3-2ac3c53c79c6-installation-pull-secrets\") pod \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") "
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.214001 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c85caecd-2eec-479e-82a3-2ac3c53c79c6-ca-trust-extracted\") pod \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") "
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.214029 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-bound-sa-token\") pod \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\" (UID: \"c85caecd-2eec-479e-82a3-2ac3c53c79c6\") "
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.216416 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c85caecd-2eec-479e-82a3-2ac3c53c79c6" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.229213 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c85caecd-2eec-479e-82a3-2ac3c53c79c6" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.229627 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c85caecd-2eec-479e-82a3-2ac3c53c79c6" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.231543 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-kube-api-access-b9t88" (OuterVolumeSpecName: "kube-api-access-b9t88") pod "c85caecd-2eec-479e-82a3-2ac3c53c79c6" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6"). InnerVolumeSpecName "kube-api-access-b9t88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.245659 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c85caecd-2eec-479e-82a3-2ac3c53c79c6-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c85caecd-2eec-479e-82a3-2ac3c53c79c6" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.246075 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c85caecd-2eec-479e-82a3-2ac3c53c79c6" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.250975 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c85caecd-2eec-479e-82a3-2ac3c53c79c6" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.264211 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c85caecd-2eec-479e-82a3-2ac3c53c79c6-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c85caecd-2eec-479e-82a3-2ac3c53c79c6" (UID: "c85caecd-2eec-479e-82a3-2ac3c53c79c6"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.316761 4907 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.317116 4907 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c85caecd-2eec-479e-82a3-2ac3c53c79c6-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.317129 4907 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c85caecd-2eec-479e-82a3-2ac3c53c79c6-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.317141 4907 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.317153 4907 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.317165 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9t88\" (UniqueName: \"kubernetes.io/projected/c85caecd-2eec-479e-82a3-2ac3c53c79c6-kube-api-access-b9t88\") on node \"crc\" DevicePath \"\""
Jan 27 18:13:15 crc kubenswrapper[4907]: I0127 18:13:15.317177 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c85caecd-2eec-479e-82a3-2ac3c53c79c6-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:13:16 crc kubenswrapper[4907]: I0127 18:13:16.180062 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wwg9f"
Jan 27 18:13:16 crc kubenswrapper[4907]: I0127 18:13:16.207097 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wwg9f"]
Jan 27 18:13:16 crc kubenswrapper[4907]: I0127 18:13:16.215538 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wwg9f"]
Jan 27 18:13:17 crc kubenswrapper[4907]: I0127 18:13:17.763473 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c85caecd-2eec-479e-82a3-2ac3c53c79c6" path="/var/lib/kubelet/pods/c85caecd-2eec-479e-82a3-2ac3c53c79c6/volumes"
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.153071 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-grwdr" podUID="c40070fe-7a8d-4f73-ad68-7e0a36680906" containerName="console" containerID="cri-o://a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43" gracePeriod=15
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.583410 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-grwdr_c40070fe-7a8d-4f73-ad68-7e0a36680906/console/0.log"
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.583728 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-grwdr"
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.688405 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-trusted-ca-bundle\") pod \"c40070fe-7a8d-4f73-ad68-7e0a36680906\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") "
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.688540 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-oauth-config\") pod \"c40070fe-7a8d-4f73-ad68-7e0a36680906\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") "
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.688587 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-config\") pod \"c40070fe-7a8d-4f73-ad68-7e0a36680906\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") "
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.688663 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-serving-cert\") pod \"c40070fe-7a8d-4f73-ad68-7e0a36680906\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") "
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.688688 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-oauth-serving-cert\") pod \"c40070fe-7a8d-4f73-ad68-7e0a36680906\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") "
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.688713 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-service-ca\") pod \"c40070fe-7a8d-4f73-ad68-7e0a36680906\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") "
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.688761 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m25qd\" (UniqueName: \"kubernetes.io/projected/c40070fe-7a8d-4f73-ad68-7e0a36680906-kube-api-access-m25qd\") pod \"c40070fe-7a8d-4f73-ad68-7e0a36680906\" (UID: \"c40070fe-7a8d-4f73-ad68-7e0a36680906\") "
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.689743 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-config" (OuterVolumeSpecName: "console-config") pod "c40070fe-7a8d-4f73-ad68-7e0a36680906" (UID: "c40070fe-7a8d-4f73-ad68-7e0a36680906"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.689759 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c40070fe-7a8d-4f73-ad68-7e0a36680906" (UID: "c40070fe-7a8d-4f73-ad68-7e0a36680906"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.690255 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c40070fe-7a8d-4f73-ad68-7e0a36680906" (UID: "c40070fe-7a8d-4f73-ad68-7e0a36680906"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.690372 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-service-ca" (OuterVolumeSpecName: "service-ca") pod "c40070fe-7a8d-4f73-ad68-7e0a36680906" (UID: "c40070fe-7a8d-4f73-ad68-7e0a36680906"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.695120 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c40070fe-7a8d-4f73-ad68-7e0a36680906" (UID: "c40070fe-7a8d-4f73-ad68-7e0a36680906"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.695360 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c40070fe-7a8d-4f73-ad68-7e0a36680906" (UID: "c40070fe-7a8d-4f73-ad68-7e0a36680906"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.695500 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c40070fe-7a8d-4f73-ad68-7e0a36680906-kube-api-access-m25qd" (OuterVolumeSpecName: "kube-api-access-m25qd") pod "c40070fe-7a8d-4f73-ad68-7e0a36680906" (UID: "c40070fe-7a8d-4f73-ad68-7e0a36680906"). InnerVolumeSpecName "kube-api-access-m25qd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.790418 4907 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.790907 4907 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.790918 4907 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c40070fe-7a8d-4f73-ad68-7e0a36680906-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.790927 4907 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.790935 4907 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.790945 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m25qd\" (UniqueName: \"kubernetes.io/projected/c40070fe-7a8d-4f73-ad68-7e0a36680906-kube-api-access-m25qd\") on node \"crc\" DevicePath \"\""
Jan 27 18:13:19 crc kubenswrapper[4907]: I0127 18:13:19.790955 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c40070fe-7a8d-4f73-ad68-7e0a36680906-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:13:20 crc kubenswrapper[4907]: I0127 18:13:20.213130 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-grwdr_c40070fe-7a8d-4f73-ad68-7e0a36680906/console/0.log"
Jan 27 18:13:20 crc kubenswrapper[4907]: I0127 18:13:20.213198 4907 generic.go:334] "Generic (PLEG): container finished" podID="c40070fe-7a8d-4f73-ad68-7e0a36680906" containerID="a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43" exitCode=2
Jan 27 18:13:20 crc kubenswrapper[4907]: I0127 18:13:20.213239 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-grwdr" event={"ID":"c40070fe-7a8d-4f73-ad68-7e0a36680906","Type":"ContainerDied","Data":"a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43"}
Jan 27 18:13:20 crc kubenswrapper[4907]: I0127 18:13:20.213276 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-grwdr" event={"ID":"c40070fe-7a8d-4f73-ad68-7e0a36680906","Type":"ContainerDied","Data":"4f4e687b11dd2ca7eb21e3c540aa81cbaa9c488161aa4b888533995942e8fa1a"}
Jan 27 18:13:20 crc kubenswrapper[4907]: I0127 18:13:20.213297 4907 scope.go:117] "RemoveContainer" containerID="a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43"
Jan 27 18:13:20 crc kubenswrapper[4907]: I0127 18:13:20.213344 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-grwdr"
Jan 27 18:13:20 crc kubenswrapper[4907]: I0127 18:13:20.241239 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-grwdr"]
Jan 27 18:13:20 crc kubenswrapper[4907]: I0127 18:13:20.246932 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-grwdr"]
Jan 27 18:13:20 crc kubenswrapper[4907]: I0127 18:13:20.250083 4907 scope.go:117] "RemoveContainer" containerID="a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43"
Jan 27 18:13:20 crc kubenswrapper[4907]: E0127 18:13:20.250640 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43\": container with ID starting with a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43 not found: ID does not exist" containerID="a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43"
Jan 27 18:13:20 crc kubenswrapper[4907]: I0127 18:13:20.250686 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43"} err="failed to get container status \"a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43\": rpc error: code = NotFound desc = could not find container \"a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43\": container with ID starting with a9437836e13a9310ce3bb1c674d99f584d4b10df5c58c244341db77b0fe6ab43 not found: ID does not exist"
Jan 27 18:13:21 crc kubenswrapper[4907]: I0127 18:13:21.760492 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c40070fe-7a8d-4f73-ad68-7e0a36680906" path="/var/lib/kubelet/pods/c40070fe-7a8d-4f73-ad68-7e0a36680906/volumes"
Jan 27 18:13:24 crc kubenswrapper[4907]: I0127 18:13:24.364796 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw"
Jan 27 18:13:24 crc kubenswrapper[4907]: I0127 18:13:24.370216 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw"
Jan 27 18:13:45 crc kubenswrapper[4907]: I0127 18:13:45.432620 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 18:13:45 crc kubenswrapper[4907]: I0127 18:13:45.483302 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 18:13:46 crc kubenswrapper[4907]: I0127 18:13:46.481235 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.670104 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6b447cd8-v5z5k"]
Jan 27 18:14:28 crc kubenswrapper[4907]: E0127 18:14:28.671306 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c85caecd-2eec-479e-82a3-2ac3c53c79c6" containerName="registry"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.671323 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c85caecd-2eec-479e-82a3-2ac3c53c79c6" containerName="registry"
Jan 27 18:14:28 crc kubenswrapper[4907]: E0127 18:14:28.671346 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c40070fe-7a8d-4f73-ad68-7e0a36680906" containerName="console"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.671355 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c40070fe-7a8d-4f73-ad68-7e0a36680906" containerName="console"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.671493 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="c40070fe-7a8d-4f73-ad68-7e0a36680906" containerName="console"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.671513 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="c85caecd-2eec-479e-82a3-2ac3c53c79c6" containerName="registry"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.672086 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.688037 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b447cd8-v5z5k"]
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.712408 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-trusted-ca-bundle\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.712463 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-oauth-serving-cert\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.712481 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-serving-cert\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.712506 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8s9p\" (UniqueName: \"kubernetes.io/projected/19ce08bb-03eb-4088-9b1a-4d42adedf584-kube-api-access-h8s9p\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.712530 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-service-ca\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.712809 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-oauth-config\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.712982 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-config\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.813879 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-oauth-config\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.814792 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-config\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.814956 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-trusted-ca-bundle\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.815047 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-serving-cert\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.815073 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-oauth-serving-cert\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.815153 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8s9p\" (UniqueName: \"kubernetes.io/projected/19ce08bb-03eb-4088-9b1a-4d42adedf584-kube-api-access-h8s9p\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.815198 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-service-ca\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.815659 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-config\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.816089 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-trusted-ca-bundle\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.816219 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-oauth-serving-cert\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.816483 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-service-ca\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.820830 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-oauth-config\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.824504 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-serving-cert\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.832319 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8s9p\" (UniqueName: \"kubernetes.io/projected/19ce08bb-03eb-4088-9b1a-4d42adedf584-kube-api-access-h8s9p\") pod \"console-6b447cd8-v5z5k\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") " pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:28 crc kubenswrapper[4907]: I0127 18:14:28.991087 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:29 crc kubenswrapper[4907]: I0127 18:14:29.297197 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b447cd8-v5z5k"]
Jan 27 18:14:29 crc kubenswrapper[4907]: I0127 18:14:29.971799 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b447cd8-v5z5k" event={"ID":"19ce08bb-03eb-4088-9b1a-4d42adedf584","Type":"ContainerStarted","Data":"7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4"}
Jan 27 18:14:29 crc kubenswrapper[4907]: I0127 18:14:29.971860 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b447cd8-v5z5k" event={"ID":"19ce08bb-03eb-4088-9b1a-4d42adedf584","Type":"ContainerStarted","Data":"d22d0be7c5012debcbe1ac6b1b934a7244865eb06d8f858be9fb3384ddfdb6a5"}
Jan 27 18:14:30 crc kubenswrapper[4907]: I0127 18:14:30.003545 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6b447cd8-v5z5k" podStartSLOduration=2.003512742 podStartE2EDuration="2.003512742s" podCreationTimestamp="2026-01-27 18:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:14:29.999792593 +0000 UTC m=+525.129075235" watchObservedRunningTime="2026-01-27 18:14:30.003512742 +0000 UTC m=+525.132795384"
Jan 27 18:14:38 crc kubenswrapper[4907]: I0127 18:14:38.992246 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:38 crc kubenswrapper[4907]: I0127 18:14:38.993728 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:14:39 crc kubenswrapper[4907]: I0127 18:14:39.000224 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27
18:14:39 crc kubenswrapper[4907]: I0127 18:14:39.057585 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6b447cd8-v5z5k" Jan 27 18:14:39 crc kubenswrapper[4907]: I0127 18:14:39.149197 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7cc8bd7b4-59b72"] Jan 27 18:14:46 crc kubenswrapper[4907]: I0127 18:14:46.312882 4907 scope.go:117] "RemoveContainer" containerID="dd24dd32da263b7052a82f6c2b680b2979832173d139168c7d6b2bbf5b442718" Jan 27 18:14:46 crc kubenswrapper[4907]: I0127 18:14:46.350515 4907 scope.go:117] "RemoveContainer" containerID="7c3df456d26b3f55c0c3f0e8e6da999cbc7ad2995bbe95328324c900796cdcc4" Jan 27 18:14:56 crc kubenswrapper[4907]: I0127 18:14:56.521272 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:14:56 crc kubenswrapper[4907]: I0127 18:14:56.522168 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.208260 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"] Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.209667 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp" Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.216180 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.216350 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.235008 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"] Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.357954 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfb7d\" (UniqueName: \"kubernetes.io/projected/98eb00a2-9da3-459d-b011-7d92bcd6ed21-kube-api-access-rfb7d\") pod \"collect-profiles-29492295-hbgsp\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp" Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.358044 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98eb00a2-9da3-459d-b011-7d92bcd6ed21-secret-volume\") pod \"collect-profiles-29492295-hbgsp\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp" Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.358210 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98eb00a2-9da3-459d-b011-7d92bcd6ed21-config-volume\") pod \"collect-profiles-29492295-hbgsp\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp" Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.459433 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfb7d\" (UniqueName: \"kubernetes.io/projected/98eb00a2-9da3-459d-b011-7d92bcd6ed21-kube-api-access-rfb7d\") pod \"collect-profiles-29492295-hbgsp\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp" Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.459538 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98eb00a2-9da3-459d-b011-7d92bcd6ed21-secret-volume\") pod \"collect-profiles-29492295-hbgsp\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp" Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.459617 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98eb00a2-9da3-459d-b011-7d92bcd6ed21-config-volume\") pod \"collect-profiles-29492295-hbgsp\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp" Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.461276 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98eb00a2-9da3-459d-b011-7d92bcd6ed21-config-volume\") pod \"collect-profiles-29492295-hbgsp\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp" Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.467755 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/98eb00a2-9da3-459d-b011-7d92bcd6ed21-secret-volume\") pod \"collect-profiles-29492295-hbgsp\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp" Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.477646 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfb7d\" (UniqueName: \"kubernetes.io/projected/98eb00a2-9da3-459d-b011-7d92bcd6ed21-kube-api-access-rfb7d\") pod \"collect-profiles-29492295-hbgsp\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp" Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.535645 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp" Jan 27 18:15:00 crc kubenswrapper[4907]: I0127 18:15:00.784886 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"] Jan 27 18:15:01 crc kubenswrapper[4907]: I0127 18:15:01.226664 4907 generic.go:334] "Generic (PLEG): container finished" podID="98eb00a2-9da3-459d-b011-7d92bcd6ed21" containerID="18df1497634165c04863e96f7f6daec0a2367654ea826c8f22afef5c3b441191" exitCode=0 Jan 27 18:15:01 crc kubenswrapper[4907]: I0127 18:15:01.226741 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp" event={"ID":"98eb00a2-9da3-459d-b011-7d92bcd6ed21","Type":"ContainerDied","Data":"18df1497634165c04863e96f7f6daec0a2367654ea826c8f22afef5c3b441191"} Jan 27 18:15:01 crc kubenswrapper[4907]: I0127 18:15:01.227084 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp" 
event={"ID":"98eb00a2-9da3-459d-b011-7d92bcd6ed21","Type":"ContainerStarted","Data":"b88f6767f4fd68e044541259acb6bf93287ffd86e079fdec4fc25cc2cfd19dd6"} Jan 27 18:15:02 crc kubenswrapper[4907]: I0127 18:15:02.522646 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp" Jan 27 18:15:02 crc kubenswrapper[4907]: I0127 18:15:02.689003 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98eb00a2-9da3-459d-b011-7d92bcd6ed21-secret-volume\") pod \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") " Jan 27 18:15:02 crc kubenswrapper[4907]: I0127 18:15:02.689094 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfb7d\" (UniqueName: \"kubernetes.io/projected/98eb00a2-9da3-459d-b011-7d92bcd6ed21-kube-api-access-rfb7d\") pod \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") " Jan 27 18:15:02 crc kubenswrapper[4907]: I0127 18:15:02.689204 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98eb00a2-9da3-459d-b011-7d92bcd6ed21-config-volume\") pod \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\" (UID: \"98eb00a2-9da3-459d-b011-7d92bcd6ed21\") " Jan 27 18:15:02 crc kubenswrapper[4907]: I0127 18:15:02.690586 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98eb00a2-9da3-459d-b011-7d92bcd6ed21-config-volume" (OuterVolumeSpecName: "config-volume") pod "98eb00a2-9da3-459d-b011-7d92bcd6ed21" (UID: "98eb00a2-9da3-459d-b011-7d92bcd6ed21"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:15:02 crc kubenswrapper[4907]: I0127 18:15:02.695165 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98eb00a2-9da3-459d-b011-7d92bcd6ed21-kube-api-access-rfb7d" (OuterVolumeSpecName: "kube-api-access-rfb7d") pod "98eb00a2-9da3-459d-b011-7d92bcd6ed21" (UID: "98eb00a2-9da3-459d-b011-7d92bcd6ed21"). InnerVolumeSpecName "kube-api-access-rfb7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:15:02 crc kubenswrapper[4907]: I0127 18:15:02.695473 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98eb00a2-9da3-459d-b011-7d92bcd6ed21-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "98eb00a2-9da3-459d-b011-7d92bcd6ed21" (UID: "98eb00a2-9da3-459d-b011-7d92bcd6ed21"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:15:02 crc kubenswrapper[4907]: I0127 18:15:02.790965 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfb7d\" (UniqueName: \"kubernetes.io/projected/98eb00a2-9da3-459d-b011-7d92bcd6ed21-kube-api-access-rfb7d\") on node \"crc\" DevicePath \"\"" Jan 27 18:15:02 crc kubenswrapper[4907]: I0127 18:15:02.791469 4907 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98eb00a2-9da3-459d-b011-7d92bcd6ed21-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:15:02 crc kubenswrapper[4907]: I0127 18:15:02.791670 4907 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98eb00a2-9da3-459d-b011-7d92bcd6ed21-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:15:03 crc kubenswrapper[4907]: I0127 18:15:03.243372 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp" 
event={"ID":"98eb00a2-9da3-459d-b011-7d92bcd6ed21","Type":"ContainerDied","Data":"b88f6767f4fd68e044541259acb6bf93287ffd86e079fdec4fc25cc2cfd19dd6"} Jan 27 18:15:03 crc kubenswrapper[4907]: I0127 18:15:03.243414 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b88f6767f4fd68e044541259acb6bf93287ffd86e079fdec4fc25cc2cfd19dd6" Jan 27 18:15:03 crc kubenswrapper[4907]: I0127 18:15:03.243908 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp" Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.216747 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7cc8bd7b4-59b72" podUID="bbb41873-fa83-4786-b31d-d0d3ebeb902b" containerName="console" containerID="cri-o://9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2" gracePeriod=15 Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.586509 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7cc8bd7b4-59b72_bbb41873-fa83-4786-b31d-d0d3ebeb902b/console/0.log" Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.586602 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.728010 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-config\") pod \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.728071 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-oauth-serving-cert\") pod \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.728101 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-serving-cert\") pod \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.728892 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bbb41873-fa83-4786-b31d-d0d3ebeb902b" (UID: "bbb41873-fa83-4786-b31d-d0d3ebeb902b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.728904 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-config" (OuterVolumeSpecName: "console-config") pod "bbb41873-fa83-4786-b31d-d0d3ebeb902b" (UID: "bbb41873-fa83-4786-b31d-d0d3ebeb902b"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.729061 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-trusted-ca-bundle\") pod \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.729094 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-oauth-config\") pod \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.729145 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz5wq\" (UniqueName: \"kubernetes.io/projected/bbb41873-fa83-4786-b31d-d0d3ebeb902b-kube-api-access-hz5wq\") pod \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.729186 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bbb41873-fa83-4786-b31d-d0d3ebeb902b" (UID: "bbb41873-fa83-4786-b31d-d0d3ebeb902b"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.729228 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-service-ca\") pod \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\" (UID: \"bbb41873-fa83-4786-b31d-d0d3ebeb902b\") " Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.729507 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.729524 4907 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.729536 4907 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.730062 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-service-ca" (OuterVolumeSpecName: "service-ca") pod "bbb41873-fa83-4786-b31d-d0d3ebeb902b" (UID: "bbb41873-fa83-4786-b31d-d0d3ebeb902b"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.733219 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbb41873-fa83-4786-b31d-d0d3ebeb902b-kube-api-access-hz5wq" (OuterVolumeSpecName: "kube-api-access-hz5wq") pod "bbb41873-fa83-4786-b31d-d0d3ebeb902b" (UID: "bbb41873-fa83-4786-b31d-d0d3ebeb902b"). InnerVolumeSpecName "kube-api-access-hz5wq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.733309 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bbb41873-fa83-4786-b31d-d0d3ebeb902b" (UID: "bbb41873-fa83-4786-b31d-d0d3ebeb902b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.734065 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bbb41873-fa83-4786-b31d-d0d3ebeb902b" (UID: "bbb41873-fa83-4786-b31d-d0d3ebeb902b"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.830689 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz5wq\" (UniqueName: \"kubernetes.io/projected/bbb41873-fa83-4786-b31d-d0d3ebeb902b-kube-api-access-hz5wq\") on node \"crc\" DevicePath \"\"" Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.831028 4907 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbb41873-fa83-4786-b31d-d0d3ebeb902b-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.831096 4907 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:15:04 crc kubenswrapper[4907]: I0127 18:15:04.831167 4907 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bbb41873-fa83-4786-b31d-d0d3ebeb902b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:15:05 crc kubenswrapper[4907]: I0127 18:15:05.262336 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7cc8bd7b4-59b72_bbb41873-fa83-4786-b31d-d0d3ebeb902b/console/0.log" Jan 27 18:15:05 crc kubenswrapper[4907]: I0127 18:15:05.262425 4907 generic.go:334] "Generic (PLEG): container finished" podID="bbb41873-fa83-4786-b31d-d0d3ebeb902b" containerID="9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2" exitCode=2 Jan 27 18:15:05 crc kubenswrapper[4907]: I0127 18:15:05.262471 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cc8bd7b4-59b72" event={"ID":"bbb41873-fa83-4786-b31d-d0d3ebeb902b","Type":"ContainerDied","Data":"9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2"} Jan 27 18:15:05 crc kubenswrapper[4907]: I0127 
18:15:05.262513 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cc8bd7b4-59b72" event={"ID":"bbb41873-fa83-4786-b31d-d0d3ebeb902b","Type":"ContainerDied","Data":"b4aad96ead503466094394241ee313a5b94bd8b4c7ae6d6e87931f328c359954"} Jan 27 18:15:05 crc kubenswrapper[4907]: I0127 18:15:05.262541 4907 scope.go:117] "RemoveContainer" containerID="9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2" Jan 27 18:15:05 crc kubenswrapper[4907]: I0127 18:15:05.262748 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7cc8bd7b4-59b72" Jan 27 18:15:05 crc kubenswrapper[4907]: I0127 18:15:05.297024 4907 scope.go:117] "RemoveContainer" containerID="9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2" Jan 27 18:15:05 crc kubenswrapper[4907]: E0127 18:15:05.298022 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2\": container with ID starting with 9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2 not found: ID does not exist" containerID="9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2" Jan 27 18:15:05 crc kubenswrapper[4907]: I0127 18:15:05.298138 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2"} err="failed to get container status \"9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2\": rpc error: code = NotFound desc = could not find container \"9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2\": container with ID starting with 9600e7f81569a729c50baa7d464638b9db96ec3270ff30e5935e292dbf9203c2 not found: ID does not exist" Jan 27 18:15:05 crc kubenswrapper[4907]: I0127 18:15:05.316498 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-7cc8bd7b4-59b72"] Jan 27 18:15:05 crc kubenswrapper[4907]: I0127 18:15:05.321773 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7cc8bd7b4-59b72"] Jan 27 18:15:05 crc kubenswrapper[4907]: I0127 18:15:05.760357 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbb41873-fa83-4786-b31d-d0d3ebeb902b" path="/var/lib/kubelet/pods/bbb41873-fa83-4786-b31d-d0d3ebeb902b/volumes" Jan 27 18:15:26 crc kubenswrapper[4907]: I0127 18:15:26.522368 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:15:26 crc kubenswrapper[4907]: I0127 18:15:26.523240 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:15:46 crc kubenswrapper[4907]: I0127 18:15:46.422915 4907 scope.go:117] "RemoveContainer" containerID="ee0b36e78c4be660d4c081e70ceb4caf889b14b20ef5255003245d03dea37b84" Jan 27 18:15:46 crc kubenswrapper[4907]: I0127 18:15:46.437930 4907 scope.go:117] "RemoveContainer" containerID="4b9a25f367c300489066223e7c655f68dd2a0d8bca339cc8ab69304836e3cab8" Jan 27 18:15:56 crc kubenswrapper[4907]: I0127 18:15:56.521206 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:15:56 crc kubenswrapper[4907]: I0127 18:15:56.522256 4907 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:15:56 crc kubenswrapper[4907]: I0127 18:15:56.522333 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:15:56 crc kubenswrapper[4907]: I0127 18:15:56.523432 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ab92d09fe428c3a9b4babe53db3cc7cade210df12baba2fd0ad23f40dd462ded"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:15:56 crc kubenswrapper[4907]: I0127 18:15:56.523716 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://ab92d09fe428c3a9b4babe53db3cc7cade210df12baba2fd0ad23f40dd462ded" gracePeriod=600 Jan 27 18:15:56 crc kubenswrapper[4907]: I0127 18:15:56.716136 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="ab92d09fe428c3a9b4babe53db3cc7cade210df12baba2fd0ad23f40dd462ded" exitCode=0 Jan 27 18:15:56 crc kubenswrapper[4907]: I0127 18:15:56.716198 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"ab92d09fe428c3a9b4babe53db3cc7cade210df12baba2fd0ad23f40dd462ded"} Jan 27 18:15:56 crc kubenswrapper[4907]: I0127 
18:15:56.716309 4907 scope.go:117] "RemoveContainer" containerID="42d20c6a3c7e78cf4dce8449106267d123618c4b64f512fb555d0ba2befbdb39" Jan 27 18:15:57 crc kubenswrapper[4907]: I0127 18:15:57.727206 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"6099244ea1b816357fdc0578901eb21429999a3cda00a97382e3e7b69c0e3a0f"} Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.427547 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw"] Jan 27 18:17:08 crc kubenswrapper[4907]: E0127 18:17:08.428496 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbb41873-fa83-4786-b31d-d0d3ebeb902b" containerName="console" Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.428509 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbb41873-fa83-4786-b31d-d0d3ebeb902b" containerName="console" Jan 27 18:17:08 crc kubenswrapper[4907]: E0127 18:17:08.428518 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98eb00a2-9da3-459d-b011-7d92bcd6ed21" containerName="collect-profiles" Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.428525 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="98eb00a2-9da3-459d-b011-7d92bcd6ed21" containerName="collect-profiles" Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.428665 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="98eb00a2-9da3-459d-b011-7d92bcd6ed21" containerName="collect-profiles" Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.428674 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbb41873-fa83-4786-b31d-d0d3ebeb902b" containerName="console" Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.429487 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.432547 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.439005 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw"] Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.499259 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8zq5\" (UniqueName: \"kubernetes.io/projected/23fc61bd-6b09-47f7-b16a-b71c959bef3d-kube-api-access-f8zq5\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.499818 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.499845 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" Jan 27 18:17:08 crc kubenswrapper[4907]: 
I0127 18:17:08.601165 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8zq5\" (UniqueName: \"kubernetes.io/projected/23fc61bd-6b09-47f7-b16a-b71c959bef3d-kube-api-access-f8zq5\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.601393 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.601442 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.602282 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.602379 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.627672 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8zq5\" (UniqueName: \"kubernetes.io/projected/23fc61bd-6b09-47f7-b16a-b71c959bef3d-kube-api-access-f8zq5\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.786643 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" Jan 27 18:17:08 crc kubenswrapper[4907]: I0127 18:17:08.992014 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw"] Jan 27 18:17:09 crc kubenswrapper[4907]: I0127 18:17:09.280003 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" event={"ID":"23fc61bd-6b09-47f7-b16a-b71c959bef3d","Type":"ContainerStarted","Data":"1bd10774a8b613771fc57d0da596cfbb2be7abf43d460e58524a9568bc042341"} Jan 27 18:17:09 crc kubenswrapper[4907]: I0127 18:17:09.280066 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" event={"ID":"23fc61bd-6b09-47f7-b16a-b71c959bef3d","Type":"ContainerStarted","Data":"c94ead4396ccaaa12ad64c724d23981bc8d6f10d8d9aef6045fac5d6894727e5"} Jan 27 18:17:10 crc kubenswrapper[4907]: I0127 18:17:10.291034 4907 
generic.go:334] "Generic (PLEG): container finished" podID="23fc61bd-6b09-47f7-b16a-b71c959bef3d" containerID="1bd10774a8b613771fc57d0da596cfbb2be7abf43d460e58524a9568bc042341" exitCode=0 Jan 27 18:17:10 crc kubenswrapper[4907]: I0127 18:17:10.291164 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" event={"ID":"23fc61bd-6b09-47f7-b16a-b71c959bef3d","Type":"ContainerDied","Data":"1bd10774a8b613771fc57d0da596cfbb2be7abf43d460e58524a9568bc042341"} Jan 27 18:17:10 crc kubenswrapper[4907]: I0127 18:17:10.294479 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:17:12 crc kubenswrapper[4907]: I0127 18:17:12.315423 4907 generic.go:334] "Generic (PLEG): container finished" podID="23fc61bd-6b09-47f7-b16a-b71c959bef3d" containerID="5e3533c5a2464e64fa9ee0b2262a8a3b1226d6e695a8ded74e370605957f71ef" exitCode=0 Jan 27 18:17:12 crc kubenswrapper[4907]: I0127 18:17:12.315527 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" event={"ID":"23fc61bd-6b09-47f7-b16a-b71c959bef3d","Type":"ContainerDied","Data":"5e3533c5a2464e64fa9ee0b2262a8a3b1226d6e695a8ded74e370605957f71ef"} Jan 27 18:17:13 crc kubenswrapper[4907]: I0127 18:17:13.325585 4907 generic.go:334] "Generic (PLEG): container finished" podID="23fc61bd-6b09-47f7-b16a-b71c959bef3d" containerID="8d3691680db889fd3cf4dd81427e8fe95ca47d5ddb14a685ad212084be35cc2d" exitCode=0 Jan 27 18:17:13 crc kubenswrapper[4907]: I0127 18:17:13.325643 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" event={"ID":"23fc61bd-6b09-47f7-b16a-b71c959bef3d","Type":"ContainerDied","Data":"8d3691680db889fd3cf4dd81427e8fe95ca47d5ddb14a685ad212084be35cc2d"} Jan 27 18:17:14 crc kubenswrapper[4907]: I0127 
18:17:14.721268 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" Jan 27 18:17:14 crc kubenswrapper[4907]: I0127 18:17:14.796430 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-bundle\") pod \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " Jan 27 18:17:14 crc kubenswrapper[4907]: I0127 18:17:14.796652 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8zq5\" (UniqueName: \"kubernetes.io/projected/23fc61bd-6b09-47f7-b16a-b71c959bef3d-kube-api-access-f8zq5\") pod \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " Jan 27 18:17:14 crc kubenswrapper[4907]: I0127 18:17:14.796678 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-util\") pod \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\" (UID: \"23fc61bd-6b09-47f7-b16a-b71c959bef3d\") " Jan 27 18:17:14 crc kubenswrapper[4907]: I0127 18:17:14.798578 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-bundle" (OuterVolumeSpecName: "bundle") pod "23fc61bd-6b09-47f7-b16a-b71c959bef3d" (UID: "23fc61bd-6b09-47f7-b16a-b71c959bef3d"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:17:14 crc kubenswrapper[4907]: I0127 18:17:14.807479 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23fc61bd-6b09-47f7-b16a-b71c959bef3d-kube-api-access-f8zq5" (OuterVolumeSpecName: "kube-api-access-f8zq5") pod "23fc61bd-6b09-47f7-b16a-b71c959bef3d" (UID: "23fc61bd-6b09-47f7-b16a-b71c959bef3d"). InnerVolumeSpecName "kube-api-access-f8zq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:17:14 crc kubenswrapper[4907]: I0127 18:17:14.847351 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-util" (OuterVolumeSpecName: "util") pod "23fc61bd-6b09-47f7-b16a-b71c959bef3d" (UID: "23fc61bd-6b09-47f7-b16a-b71c959bef3d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:17:14 crc kubenswrapper[4907]: I0127 18:17:14.898001 4907 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:14 crc kubenswrapper[4907]: I0127 18:17:14.898047 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8zq5\" (UniqueName: \"kubernetes.io/projected/23fc61bd-6b09-47f7-b16a-b71c959bef3d-kube-api-access-f8zq5\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:14 crc kubenswrapper[4907]: I0127 18:17:14.898066 4907 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23fc61bd-6b09-47f7-b16a-b71c959bef3d-util\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:15 crc kubenswrapper[4907]: I0127 18:17:15.350736 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" 
event={"ID":"23fc61bd-6b09-47f7-b16a-b71c959bef3d","Type":"ContainerDied","Data":"c94ead4396ccaaa12ad64c724d23981bc8d6f10d8d9aef6045fac5d6894727e5"} Jan 27 18:17:15 crc kubenswrapper[4907]: I0127 18:17:15.350803 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c94ead4396ccaaa12ad64c724d23981bc8d6f10d8d9aef6045fac5d6894727e5" Jan 27 18:17:15 crc kubenswrapper[4907]: I0127 18:17:15.350929 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.189971 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-k7sff"] Jan 27 18:17:26 crc kubenswrapper[4907]: E0127 18:17:26.190733 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23fc61bd-6b09-47f7-b16a-b71c959bef3d" containerName="extract" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.190745 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="23fc61bd-6b09-47f7-b16a-b71c959bef3d" containerName="extract" Jan 27 18:17:26 crc kubenswrapper[4907]: E0127 18:17:26.190763 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23fc61bd-6b09-47f7-b16a-b71c959bef3d" containerName="util" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.190769 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="23fc61bd-6b09-47f7-b16a-b71c959bef3d" containerName="util" Jan 27 18:17:26 crc kubenswrapper[4907]: E0127 18:17:26.190781 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23fc61bd-6b09-47f7-b16a-b71c959bef3d" containerName="pull" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.190788 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="23fc61bd-6b09-47f7-b16a-b71c959bef3d" containerName="pull" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.190888 4907 
memory_manager.go:354] "RemoveStaleState removing state" podUID="23fc61bd-6b09-47f7-b16a-b71c959bef3d" containerName="extract" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.191388 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k7sff" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.193751 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.194219 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-7677k" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.199920 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.204995 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s"] Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.205786 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.213214 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.214483 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-k7sff"] Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.219013 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-bkbgx" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.221894 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh"] Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.222683 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.226464 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s"] Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.241607 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh"] Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.283975 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/91eb4541-31f7-488a-ae31-d57bfa265442-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s\" (UID: \"91eb4541-31f7-488a-ae31-d57bfa265442\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.284038 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c1a068f6-1c40-4947-b9bd-3b018ddcb25b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh\" (UID: \"c1a068f6-1c40-4947-b9bd-3b018ddcb25b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.284087 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l77n\" (UniqueName: \"kubernetes.io/projected/d68ab367-2841-460c-b666-5b52ec455dd2-kube-api-access-9l77n\") pod \"obo-prometheus-operator-68bc856cb9-k7sff\" (UID: \"d68ab367-2841-460c-b666-5b52ec455dd2\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k7sff" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.284119 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c1a068f6-1c40-4947-b9bd-3b018ddcb25b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh\" (UID: \"c1a068f6-1c40-4947-b9bd-3b018ddcb25b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.284137 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/91eb4541-31f7-488a-ae31-d57bfa265442-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s\" (UID: \"91eb4541-31f7-488a-ae31-d57bfa265442\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.385533 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/91eb4541-31f7-488a-ae31-d57bfa265442-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s\" (UID: \"91eb4541-31f7-488a-ae31-d57bfa265442\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.385670 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c1a068f6-1c40-4947-b9bd-3b018ddcb25b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh\" (UID: \"c1a068f6-1c40-4947-b9bd-3b018ddcb25b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.385769 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l77n\" (UniqueName: 
\"kubernetes.io/projected/d68ab367-2841-460c-b666-5b52ec455dd2-kube-api-access-9l77n\") pod \"obo-prometheus-operator-68bc856cb9-k7sff\" (UID: \"d68ab367-2841-460c-b666-5b52ec455dd2\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k7sff" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.385831 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c1a068f6-1c40-4947-b9bd-3b018ddcb25b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh\" (UID: \"c1a068f6-1c40-4947-b9bd-3b018ddcb25b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.385874 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/91eb4541-31f7-488a-ae31-d57bfa265442-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s\" (UID: \"91eb4541-31f7-488a-ae31-d57bfa265442\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.396355 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c1a068f6-1c40-4947-b9bd-3b018ddcb25b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh\" (UID: \"c1a068f6-1c40-4947-b9bd-3b018ddcb25b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.397282 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c1a068f6-1c40-4947-b9bd-3b018ddcb25b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh\" (UID: \"c1a068f6-1c40-4947-b9bd-3b018ddcb25b\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.400995 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/91eb4541-31f7-488a-ae31-d57bfa265442-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s\" (UID: \"91eb4541-31f7-488a-ae31-d57bfa265442\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.400995 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/91eb4541-31f7-488a-ae31-d57bfa265442-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s\" (UID: \"91eb4541-31f7-488a-ae31-d57bfa265442\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.405318 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7x4fp"] Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.406061 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.406134 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l77n\" (UniqueName: \"kubernetes.io/projected/d68ab367-2841-460c-b666-5b52ec455dd2-kube-api-access-9l77n\") pod \"obo-prometheus-operator-68bc856cb9-k7sff\" (UID: \"d68ab367-2841-460c-b666-5b52ec455dd2\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k7sff" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.408406 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.408420 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-85xln" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.428857 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7x4fp"] Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.487284 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcdl4\" (UniqueName: \"kubernetes.io/projected/812bcca3-8896-4492-86ff-1df596f0e604-kube-api-access-qcdl4\") pod \"observability-operator-59bdc8b94-7x4fp\" (UID: \"812bcca3-8896-4492-86ff-1df596f0e604\") " pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.487369 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/812bcca3-8896-4492-86ff-1df596f0e604-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7x4fp\" (UID: \"812bcca3-8896-4492-86ff-1df596f0e604\") " pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 
18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.514808 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k7sff" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.540113 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.563767 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.590328 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/812bcca3-8896-4492-86ff-1df596f0e604-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7x4fp\" (UID: \"812bcca3-8896-4492-86ff-1df596f0e604\") " pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.590422 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcdl4\" (UniqueName: \"kubernetes.io/projected/812bcca3-8896-4492-86ff-1df596f0e604-kube-api-access-qcdl4\") pod \"observability-operator-59bdc8b94-7x4fp\" (UID: \"812bcca3-8896-4492-86ff-1df596f0e604\") " pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.598598 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-65v8r"] Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.599369 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.600484 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/812bcca3-8896-4492-86ff-1df596f0e604-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7x4fp\" (UID: \"812bcca3-8896-4492-86ff-1df596f0e604\") " pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.602462 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-jxxzz" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.609151 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcdl4\" (UniqueName: \"kubernetes.io/projected/812bcca3-8896-4492-86ff-1df596f0e604-kube-api-access-qcdl4\") pod \"observability-operator-59bdc8b94-7x4fp\" (UID: \"812bcca3-8896-4492-86ff-1df596f0e604\") " pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.615372 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-65v8r"] Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.691860 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d27rk\" (UniqueName: \"kubernetes.io/projected/99183c02-34c0-4a91-9e6e-0efd5d2a7a42-kube-api-access-d27rk\") pod \"perses-operator-5bf474d74f-65v8r\" (UID: \"99183c02-34c0-4a91-9e6e-0efd5d2a7a42\") " pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.691912 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/99183c02-34c0-4a91-9e6e-0efd5d2a7a42-openshift-service-ca\") pod \"perses-operator-5bf474d74f-65v8r\" (UID: \"99183c02-34c0-4a91-9e6e-0efd5d2a7a42\") " pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.790195 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.797298 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d27rk\" (UniqueName: \"kubernetes.io/projected/99183c02-34c0-4a91-9e6e-0efd5d2a7a42-kube-api-access-d27rk\") pod \"perses-operator-5bf474d74f-65v8r\" (UID: \"99183c02-34c0-4a91-9e6e-0efd5d2a7a42\") " pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.797344 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/99183c02-34c0-4a91-9e6e-0efd5d2a7a42-openshift-service-ca\") pod \"perses-operator-5bf474d74f-65v8r\" (UID: \"99183c02-34c0-4a91-9e6e-0efd5d2a7a42\") " pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.798879 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/99183c02-34c0-4a91-9e6e-0efd5d2a7a42-openshift-service-ca\") pod \"perses-operator-5bf474d74f-65v8r\" (UID: \"99183c02-34c0-4a91-9e6e-0efd5d2a7a42\") " pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.823157 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d27rk\" (UniqueName: \"kubernetes.io/projected/99183c02-34c0-4a91-9e6e-0efd5d2a7a42-kube-api-access-d27rk\") pod \"perses-operator-5bf474d74f-65v8r\" 
(UID: \"99183c02-34c0-4a91-9e6e-0efd5d2a7a42\") " pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.924503 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh"] Jan 27 18:17:26 crc kubenswrapper[4907]: W0127 18:17:26.936058 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1a068f6_1c40_4947_b9bd_3b018ddcb25b.slice/crio-256d2e74757f84382aa0389c69b7e4bb189d63f89ad1530a3debedcec2eeea48 WatchSource:0}: Error finding container 256d2e74757f84382aa0389c69b7e4bb189d63f89ad1530a3debedcec2eeea48: Status 404 returned error can't find the container with id 256d2e74757f84382aa0389c69b7e4bb189d63f89ad1530a3debedcec2eeea48 Jan 27 18:17:26 crc kubenswrapper[4907]: I0127 18:17:26.948852 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 18:17:27 crc kubenswrapper[4907]: I0127 18:17:27.073935 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s"] Jan 27 18:17:27 crc kubenswrapper[4907]: W0127 18:17:27.081347 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91eb4541_31f7_488a_ae31_d57bfa265442.slice/crio-ac5d9a3f7726342050d00685a08ed74616689e051dd717cbe076eafd2d046e24 WatchSource:0}: Error finding container ac5d9a3f7726342050d00685a08ed74616689e051dd717cbe076eafd2d046e24: Status 404 returned error can't find the container with id ac5d9a3f7726342050d00685a08ed74616689e051dd717cbe076eafd2d046e24 Jan 27 18:17:27 crc kubenswrapper[4907]: I0127 18:17:27.104492 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-k7sff"] Jan 27 18:17:27 crc 
kubenswrapper[4907]: I0127 18:17:27.244344 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-65v8r"] Jan 27 18:17:27 crc kubenswrapper[4907]: W0127 18:17:27.252904 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99183c02_34c0_4a91_9e6e_0efd5d2a7a42.slice/crio-3c4f4f42e3cf10fa10a3e4783399ea46450a23bc1ecd7b9ca7760143d6ef87af WatchSource:0}: Error finding container 3c4f4f42e3cf10fa10a3e4783399ea46450a23bc1ecd7b9ca7760143d6ef87af: Status 404 returned error can't find the container with id 3c4f4f42e3cf10fa10a3e4783399ea46450a23bc1ecd7b9ca7760143d6ef87af Jan 27 18:17:27 crc kubenswrapper[4907]: I0127 18:17:27.271936 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7x4fp"] Jan 27 18:17:27 crc kubenswrapper[4907]: I0127 18:17:27.449134 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" event={"ID":"812bcca3-8896-4492-86ff-1df596f0e604","Type":"ContainerStarted","Data":"b7ccf81eeea2832b2057c4c1ffe73101b7b4c0075a4599a7452239bbd61e2e00"} Jan 27 18:17:27 crc kubenswrapper[4907]: I0127 18:17:27.450262 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s" event={"ID":"91eb4541-31f7-488a-ae31-d57bfa265442","Type":"ContainerStarted","Data":"ac5d9a3f7726342050d00685a08ed74616689e051dd717cbe076eafd2d046e24"} Jan 27 18:17:27 crc kubenswrapper[4907]: I0127 18:17:27.451206 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" event={"ID":"99183c02-34c0-4a91-9e6e-0efd5d2a7a42","Type":"ContainerStarted","Data":"3c4f4f42e3cf10fa10a3e4783399ea46450a23bc1ecd7b9ca7760143d6ef87af"} Jan 27 18:17:27 crc kubenswrapper[4907]: I0127 18:17:27.452204 4907 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k7sff" event={"ID":"d68ab367-2841-460c-b666-5b52ec455dd2","Type":"ContainerStarted","Data":"97485635ee4fefc2e26a2bba4980c89b490725c8eb1cfbe2f081c335d6bd9379"} Jan 27 18:17:27 crc kubenswrapper[4907]: I0127 18:17:27.453059 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh" event={"ID":"c1a068f6-1c40-4947-b9bd-3b018ddcb25b","Type":"ContainerStarted","Data":"256d2e74757f84382aa0389c69b7e4bb189d63f89ad1530a3debedcec2eeea48"} Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.043903 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qj9w2"] Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.045137 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovn-controller" containerID="cri-o://76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b" gracePeriod=30 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.045316 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="sbdb" containerID="cri-o://ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199" gracePeriod=30 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.045349 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="nbdb" containerID="cri-o://1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b" gracePeriod=30 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.045380 4907 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="northd" containerID="cri-o://b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f" gracePeriod=30 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.045414 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20" gracePeriod=30 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.045476 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovn-acl-logging" containerID="cri-o://765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649" gracePeriod=30 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.045639 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="kube-rbac-proxy-node" containerID="cri-o://e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767" gracePeriod=30 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.087160 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" containerID="cri-o://5a8067782a2036bfd7d0190706c2df294256e816c477b42c1a74f9040dd85bf3" gracePeriod=30 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.540485 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovnkube-controller/3.log" Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 
18:17:32.543464 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovn-acl-logging/0.log" Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544055 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovn-controller/0.log" Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544510 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="5a8067782a2036bfd7d0190706c2df294256e816c477b42c1a74f9040dd85bf3" exitCode=0 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544544 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199" exitCode=0 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544568 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b" exitCode=0 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544580 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f" exitCode=0 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544590 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649" exitCode=143 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544599 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b" exitCode=143 Jan 27 18:17:32 crc kubenswrapper[4907]: 
I0127 18:17:32.544665 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"5a8067782a2036bfd7d0190706c2df294256e816c477b42c1a74f9040dd85bf3"} Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544699 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199"} Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544718 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b"} Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544730 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f"} Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544742 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649"} Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544753 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b"} Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.544772 4907 scope.go:117] "RemoveContainer" 
containerID="b48ad0fda114aa72a72bc0189a423e22fe01593ae61582627baad5b7934e07e7" Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.548798 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fgtpz_985b7738-a27c-4276-8160-c2baa64ab7f6/kube-multus/2.log" Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.549212 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fgtpz_985b7738-a27c-4276-8160-c2baa64ab7f6/kube-multus/1.log" Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.549249 4907 generic.go:334] "Generic (PLEG): container finished" podID="985b7738-a27c-4276-8160-c2baa64ab7f6" containerID="14b5e052edc9d584f105f6f14c22e4f3698d1e6bed62b8389665cf51f59b54b4" exitCode=2 Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.549268 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fgtpz" event={"ID":"985b7738-a27c-4276-8160-c2baa64ab7f6","Type":"ContainerDied","Data":"14b5e052edc9d584f105f6f14c22e4f3698d1e6bed62b8389665cf51f59b54b4"} Jan 27 18:17:32 crc kubenswrapper[4907]: I0127 18:17:32.549758 4907 scope.go:117] "RemoveContainer" containerID="14b5e052edc9d584f105f6f14c22e4f3698d1e6bed62b8389665cf51f59b54b4" Jan 27 18:17:32 crc kubenswrapper[4907]: E0127 18:17:32.550079 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-fgtpz_openshift-multus(985b7738-a27c-4276-8160-c2baa64ab7f6)\"" pod="openshift-multus/multus-fgtpz" podUID="985b7738-a27c-4276-8160-c2baa64ab7f6" Jan 27 18:17:33 crc kubenswrapper[4907]: E0127 18:17:33.153767 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda62f5e7d_70be_4705_a4b0_d5e4f531cfde.slice/crio-2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20.scope\": RecentStats: unable to find data in memory cache]" Jan 27 18:17:33 crc kubenswrapper[4907]: E0127 18:17:33.193790 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b is running failed: container process not found" containerID="1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 27 18:17:33 crc kubenswrapper[4907]: E0127 18:17:33.193906 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199 is running failed: container process not found" containerID="ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 27 18:17:33 crc kubenswrapper[4907]: E0127 18:17:33.194105 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199 is running failed: container process not found" containerID="ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 27 18:17:33 crc kubenswrapper[4907]: E0127 18:17:33.194171 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b is running failed: container process not found" containerID="1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 27 18:17:33 crc kubenswrapper[4907]: E0127 18:17:33.194336 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b is running failed: container process not found" containerID="1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 27 18:17:33 crc kubenswrapper[4907]: E0127 18:17:33.194361 4907 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="nbdb" Jan 27 18:17:33 crc kubenswrapper[4907]: E0127 18:17:33.194501 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199 is running failed: container process not found" containerID="ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 27 18:17:33 crc kubenswrapper[4907]: E0127 18:17:33.194524 4907 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="sbdb" Jan 27 18:17:33 crc kubenswrapper[4907]: I0127 18:17:33.562081 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovn-acl-logging/0.log" Jan 27 18:17:33 crc kubenswrapper[4907]: I0127 18:17:33.562740 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovn-controller/0.log" Jan 27 18:17:33 crc kubenswrapper[4907]: I0127 
18:17:33.563067 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20" exitCode=0 Jan 27 18:17:33 crc kubenswrapper[4907]: I0127 18:17:33.563124 4907 generic.go:334] "Generic (PLEG): container finished" podID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerID="e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767" exitCode=0 Jan 27 18:17:33 crc kubenswrapper[4907]: I0127 18:17:33.563126 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20"} Jan 27 18:17:33 crc kubenswrapper[4907]: I0127 18:17:33.563168 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767"} Jan 27 18:17:37 crc kubenswrapper[4907]: I0127 18:17:37.038761 4907 scope.go:117] "RemoveContainer" containerID="dda53c181ff78aaf08bce3556d02c2b61c59614b3fd7e5be49e9e2d341db4505" Jan 27 18:17:37 crc kubenswrapper[4907]: I0127 18:17:37.601632 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovn-acl-logging/0.log" Jan 27 18:17:37 crc kubenswrapper[4907]: I0127 18:17:37.602178 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovn-controller/0.log" Jan 27 18:17:37 crc kubenswrapper[4907]: I0127 18:17:37.995609 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovn-acl-logging/0.log" Jan 27 18:17:37 crc 
kubenswrapper[4907]: I0127 18:17:37.995988 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovn-controller/0.log" Jan 27 18:17:37 crc kubenswrapper[4907]: I0127 18:17:37.996313 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068275 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-env-overrides\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068308 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-log-socket\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068328 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-ovn-kubernetes\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068360 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-var-lib-openvswitch\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068387 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-config\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068408 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-netns\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068428 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-var-lib-cni-networks-ovn-kubernetes\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068452 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-node-log\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068472 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-etc-openvswitch\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068503 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-systemd-units\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: 
\"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068529 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-netd\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068567 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkx4q\" (UniqueName: \"kubernetes.io/projected/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-kube-api-access-qkx4q\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068594 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-systemd\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068609 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-bin\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068622 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-kubelet\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068641 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-openvswitch\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068667 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovn-node-metrics-cert\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068682 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-script-lib\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068707 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-slash\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068722 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-ovn\") pod \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\" (UID: \"a62f5e7d-70be-4705-a4b0-d5e4f531cfde\") " Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068784 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068933 4907 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.068974 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.069005 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.069022 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.069423 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). 
InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.069468 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.069922 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.069989 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.070330 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.070366 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-log-socket" (OuterVolumeSpecName: "log-socket") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.070392 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.070419 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.070454 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.070483 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.070510 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-node-log" (OuterVolumeSpecName: "node-log") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.070634 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-slash" (OuterVolumeSpecName: "host-slash") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.070983 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.088509 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4xx4s"] Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.089333 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.091660 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-kube-api-access-qkx4q" (OuterVolumeSpecName: "kube-api-access-qkx4q") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "kube-api-access-qkx4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.091893 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovn-acl-logging" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.091974 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovn-acl-logging" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.092058 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.092132 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.092206 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.092286 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.092363 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="kubecfg-setup" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.092433 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="kubecfg-setup" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.092504 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="kube-rbac-proxy-node" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.092631 4907 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="kube-rbac-proxy-node" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.092715 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="northd" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.092781 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="northd" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.092855 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="nbdb" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.092917 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="nbdb" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.092982 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.093043 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.093111 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="sbdb" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.093181 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="sbdb" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.093306 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovn-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.093381 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" 
containerName="ovn-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.093457 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.093528 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.093611 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.093681 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.093892 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="kube-rbac-proxy-node" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.093976 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.094049 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.094118 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.094194 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="sbdb" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.094259 4907 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovn-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.094330 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="northd" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.094401 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovn-acl-logging" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.094476 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.094543 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.094630 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="nbdb" Jan 27 18:17:38 crc kubenswrapper[4907]: E0127 18:17:38.094843 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.094919 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.095144 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" containerName="ovnkube-controller" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.097710 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.103975 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "a62f5e7d-70be-4705-a4b0-d5e4f531cfde" (UID: "a62f5e7d-70be-4705-a4b0-d5e4f531cfde"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.171913 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-etc-openvswitch\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172001 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-cni-bin\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172030 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-log-socket\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172053 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172078 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-kubelet\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172094 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-env-overrides\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172169 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-var-lib-openvswitch\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172197 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-ovn-node-metrics-cert\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172218 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-run-ovn-kubernetes\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172242 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-run-openvswitch\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172266 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-run-ovn\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172282 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d9mj\" (UniqueName: \"kubernetes.io/projected/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-kube-api-access-4d9mj\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172363 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-cni-netd\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 
18:17:38.172511 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-run-netns\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172534 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-node-log\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172585 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-ovnkube-config\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172606 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-ovnkube-script-lib\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172630 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-slash\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: 
I0127 18:17:38.172658 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-systemd-units\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172682 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-run-systemd\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172743 4907 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172755 4907 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172771 4907 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172784 4907 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172794 4907 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172803 4907 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172814 4907 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172822 4907 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172832 4907 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172842 4907 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172854 4907 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172864 4907 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172873 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkx4q\" (UniqueName: \"kubernetes.io/projected/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-kube-api-access-qkx4q\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172882 4907 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172894 4907 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172902 4907 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172911 4907 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172919 4907 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.172931 4907 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a62f5e7d-70be-4705-a4b0-d5e4f531cfde-ovnkube-script-lib\") on node \"crc\" 
DevicePath \"\"" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274186 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-run-openvswitch\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274228 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-run-ovn\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274252 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d9mj\" (UniqueName: \"kubernetes.io/projected/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-kube-api-access-4d9mj\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274279 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-cni-netd\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274298 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-run-netns\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274311 
4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-node-log\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274327 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-ovnkube-config\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274345 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-ovnkube-script-lib\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274367 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-slash\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274382 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-systemd-units\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274404 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-run-systemd\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274429 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-etc-openvswitch\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274443 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-cni-bin\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274461 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-log-socket\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274477 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274502 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-kubelet\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274518 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-env-overrides\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274540 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-var-lib-openvswitch\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274578 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-ovn-node-metrics-cert\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274595 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-run-ovn-kubernetes\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274662 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-run-ovn-kubernetes\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274694 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-run-openvswitch\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.274717 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-run-ovn\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.275108 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-cni-netd\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.275132 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-run-netns\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.275150 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-node-log\") pod \"ovnkube-node-4xx4s\" (UID: 
\"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.275788 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-ovnkube-config\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.276174 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-ovnkube-script-lib\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.276207 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-slash\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.276228 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-systemd-units\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.276246 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-run-systemd\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc 
kubenswrapper[4907]: I0127 18:17:38.276266 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-etc-openvswitch\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.276287 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-cni-bin\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.276305 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-log-socket\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.276324 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.276344 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-host-kubelet\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.276638 4907 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-env-overrides\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.276674 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-var-lib-openvswitch\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.281123 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-ovn-node-metrics-cert\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.300376 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d9mj\" (UniqueName: \"kubernetes.io/projected/ee97e15a-ebc3-4c61-9841-9c1fb43fdee7-kube-api-access-4d9mj\") pod \"ovnkube-node-4xx4s\" (UID: \"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.537843 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:38 crc kubenswrapper[4907]: W0127 18:17:38.559942 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee97e15a_ebc3_4c61_9841_9c1fb43fdee7.slice/crio-78f1a0cd13ed868896851d4c4fcbc68bd62dfe3ca2a136004cc2855ea149f976 WatchSource:0}: Error finding container 78f1a0cd13ed868896851d4c4fcbc68bd62dfe3ca2a136004cc2855ea149f976: Status 404 returned error can't find the container with id 78f1a0cd13ed868896851d4c4fcbc68bd62dfe3ca2a136004cc2855ea149f976 Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.615105 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovn-acl-logging/0.log" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.615989 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj9w2_a62f5e7d-70be-4705-a4b0-d5e4f531cfde/ovn-controller/0.log" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.616469 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" event={"ID":"a62f5e7d-70be-4705-a4b0-d5e4f531cfde","Type":"ContainerDied","Data":"a983be7de95caeeef4ab80a270899c06c8966038c1e2373e1943b0a9d39bf946"} Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.616588 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qj9w2" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.616695 4907 scope.go:117] "RemoveContainer" containerID="5a8067782a2036bfd7d0190706c2df294256e816c477b42c1a74f9040dd85bf3" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.618452 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" event={"ID":"812bcca3-8896-4492-86ff-1df596f0e604","Type":"ContainerStarted","Data":"32651cc0d9f45bfb8a0657d8774cf718bdad12aa946b4f6a6c0e98678d496679"} Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.618697 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.621427 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fgtpz_985b7738-a27c-4276-8160-c2baa64ab7f6/kube-multus/2.log" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.623502 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s" event={"ID":"91eb4541-31f7-488a-ae31-d57bfa265442","Type":"ContainerStarted","Data":"ef5c3012247def2a8d08a76ca0df8bb6d046453fb54ac1c37ff4a1a99b0ae52c"} Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.626243 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" event={"ID":"99183c02-34c0-4a91-9e6e-0efd5d2a7a42","Type":"ContainerStarted","Data":"2e27f133bb71f31801a29b81348785c06c151d02579560c08aff145ecdfbfd7e"} Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.626288 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.628388 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k7sff" event={"ID":"d68ab367-2841-460c-b666-5b52ec455dd2","Type":"ContainerStarted","Data":"6ffd4f14e8e49430c199c48ac416b2c29aaa36c06475a4885b4cb1188b6e8017"} Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.629940 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" event={"ID":"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7","Type":"ContainerStarted","Data":"78f1a0cd13ed868896851d4c4fcbc68bd62dfe3ca2a136004cc2855ea149f976"} Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.631670 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh" event={"ID":"c1a068f6-1c40-4947-b9bd-3b018ddcb25b","Type":"ContainerStarted","Data":"dc9b72d5182336e502b6892ee806e7a2caa695b0366e2eb155d131fbb2100f1b"} Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.654367 4907 scope.go:117] "RemoveContainer" containerID="ec9791678216ecd615f2906250a1a995629e19ab17edea268484b090aabbf199" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.655083 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.666389 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh" podStartSLOduration=2.566762876 podStartE2EDuration="12.666371353s" podCreationTimestamp="2026-01-27 18:17:26 +0000 UTC" firstStartedPulling="2026-01-27 18:17:26.940342679 +0000 UTC m=+702.069625291" lastFinishedPulling="2026-01-27 18:17:37.039951156 +0000 UTC m=+712.169233768" observedRunningTime="2026-01-27 18:17:38.664371285 +0000 UTC m=+713.793653907" watchObservedRunningTime="2026-01-27 18:17:38.666371353 +0000 UTC m=+713.795653965" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 
18:17:38.669129 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podStartSLOduration=1.980635576 podStartE2EDuration="12.669116473s" podCreationTimestamp="2026-01-27 18:17:26 +0000 UTC" firstStartedPulling="2026-01-27 18:17:27.281307583 +0000 UTC m=+702.410590195" lastFinishedPulling="2026-01-27 18:17:37.96978848 +0000 UTC m=+713.099071092" observedRunningTime="2026-01-27 18:17:38.639177165 +0000 UTC m=+713.768459777" watchObservedRunningTime="2026-01-27 18:17:38.669116473 +0000 UTC m=+713.798399085" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.678661 4907 scope.go:117] "RemoveContainer" containerID="1411b3b29418c3a1a108f1b581b50dc853077f6055d0e864ee8685da3a80b69b" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.689681 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k7sff" podStartSLOduration=1.8708415729999999 podStartE2EDuration="12.689662218s" podCreationTimestamp="2026-01-27 18:17:26 +0000 UTC" firstStartedPulling="2026-01-27 18:17:27.144371244 +0000 UTC m=+702.273653856" lastFinishedPulling="2026-01-27 18:17:37.963191889 +0000 UTC m=+713.092474501" observedRunningTime="2026-01-27 18:17:38.688779333 +0000 UTC m=+713.818061945" watchObservedRunningTime="2026-01-27 18:17:38.689662218 +0000 UTC m=+713.818944830" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.705792 4907 scope.go:117] "RemoveContainer" containerID="b3bac2d284149d88e8b40cc9c6e72c99c87ced07e007598c1e54c9f6dfadae3f" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.715883 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s" podStartSLOduration=1.821582716 podStartE2EDuration="12.715863128s" podCreationTimestamp="2026-01-27 18:17:26 +0000 UTC" firstStartedPulling="2026-01-27 18:17:27.084356564 
+0000 UTC m=+702.213639176" lastFinishedPulling="2026-01-27 18:17:37.978636976 +0000 UTC m=+713.107919588" observedRunningTime="2026-01-27 18:17:38.715692823 +0000 UTC m=+713.844975435" watchObservedRunningTime="2026-01-27 18:17:38.715863128 +0000 UTC m=+713.845145740" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.744702 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" podStartSLOduration=2.020214153 podStartE2EDuration="12.744684353s" podCreationTimestamp="2026-01-27 18:17:26 +0000 UTC" firstStartedPulling="2026-01-27 18:17:27.255507775 +0000 UTC m=+702.384790387" lastFinishedPulling="2026-01-27 18:17:37.979977975 +0000 UTC m=+713.109260587" observedRunningTime="2026-01-27 18:17:38.742528741 +0000 UTC m=+713.871811363" watchObservedRunningTime="2026-01-27 18:17:38.744684353 +0000 UTC m=+713.873966965" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.762282 4907 scope.go:117] "RemoveContainer" containerID="2dc1a92a20aced7ca2889484a537d10bfed0bc3c139ca9f01a7ab92a870aab20" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.766988 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qj9w2"] Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.770100 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qj9w2"] Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.786901 4907 scope.go:117] "RemoveContainer" containerID="e77f74f97fbef690f4d9f80b7f4e60c14fd9378906e42139c7fafbcedc909767" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.813821 4907 scope.go:117] "RemoveContainer" containerID="765f0c4c7a50d4a6b8b23c2499211e4e83888f5122c808fb3500f376e40a0649" Jan 27 18:17:38 crc kubenswrapper[4907]: I0127 18:17:38.830704 4907 scope.go:117] "RemoveContainer" containerID="76539deaaf5bc3590aa51b0584c9594c20f22fa94cfc6560c48ff2a22449889b" Jan 27 18:17:38 crc 
kubenswrapper[4907]: I0127 18:17:38.850575 4907 scope.go:117] "RemoveContainer" containerID="4293ee9413fadc5e995781d565049f78682de4e71193eb55f3acb8008d525e71" Jan 27 18:17:39 crc kubenswrapper[4907]: I0127 18:17:39.639500 4907 generic.go:334] "Generic (PLEG): container finished" podID="ee97e15a-ebc3-4c61-9841-9c1fb43fdee7" containerID="d07173aaf7602b3ab45a4d709aabf77031b98f2ac3561150b4a2a6613ff33c37" exitCode=0 Jan 27 18:17:39 crc kubenswrapper[4907]: I0127 18:17:39.639581 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" event={"ID":"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7","Type":"ContainerDied","Data":"d07173aaf7602b3ab45a4d709aabf77031b98f2ac3561150b4a2a6613ff33c37"} Jan 27 18:17:39 crc kubenswrapper[4907]: I0127 18:17:39.768466 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a62f5e7d-70be-4705-a4b0-d5e4f531cfde" path="/var/lib/kubelet/pods/a62f5e7d-70be-4705-a4b0-d5e4f531cfde/volumes" Jan 27 18:17:40 crc kubenswrapper[4907]: I0127 18:17:40.650827 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" event={"ID":"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7","Type":"ContainerStarted","Data":"1b63f009934061028d552839eb69f33db38326b6aceebdd43b5123ee37779657"} Jan 27 18:17:40 crc kubenswrapper[4907]: I0127 18:17:40.651176 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" event={"ID":"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7","Type":"ContainerStarted","Data":"387301dd52081a90f0a09b5b30b1f1b3d04ff6b880b9fed6e5a7b30f31d34deb"} Jan 27 18:17:40 crc kubenswrapper[4907]: I0127 18:17:40.651194 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" event={"ID":"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7","Type":"ContainerStarted","Data":"4962d59ead012d39fb0997d4004e6272420eb60f308afbcef9b7ad45d445a7f1"} Jan 27 18:17:40 crc kubenswrapper[4907]: I0127 
18:17:40.651206 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" event={"ID":"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7","Type":"ContainerStarted","Data":"2d0056deb1106a9ef6a8f2c1e1d70bf4263e749d13cea374adf31fed74393fa6"} Jan 27 18:17:40 crc kubenswrapper[4907]: I0127 18:17:40.651215 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" event={"ID":"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7","Type":"ContainerStarted","Data":"a5565b199cd5a01a375e8dc154be2d539a412d43cf397884107629c889982120"} Jan 27 18:17:40 crc kubenswrapper[4907]: I0127 18:17:40.651223 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" event={"ID":"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7","Type":"ContainerStarted","Data":"4b387cea73a6c270909f985cd35b570f16e45e4212fc6b5b7fe916047901e582"} Jan 27 18:17:43 crc kubenswrapper[4907]: I0127 18:17:43.685379 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" event={"ID":"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7","Type":"ContainerStarted","Data":"dd5ebe175d4d5f8e4da250274f3ad24e11a0f2c112e22ab69c4cbe00980c3dd5"} Jan 27 18:17:44 crc kubenswrapper[4907]: I0127 18:17:44.747794 4907 scope.go:117] "RemoveContainer" containerID="14b5e052edc9d584f105f6f14c22e4f3698d1e6bed62b8389665cf51f59b54b4" Jan 27 18:17:44 crc kubenswrapper[4907]: E0127 18:17:44.748366 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-fgtpz_openshift-multus(985b7738-a27c-4276-8160-c2baa64ab7f6)\"" pod="openshift-multus/multus-fgtpz" podUID="985b7738-a27c-4276-8160-c2baa64ab7f6" Jan 27 18:17:45 crc kubenswrapper[4907]: I0127 18:17:45.701611 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" 
event={"ID":"ee97e15a-ebc3-4c61-9841-9c1fb43fdee7","Type":"ContainerStarted","Data":"a264f9c598193abd1c32a961064b3eaf7280e75fc675ed78fb7f62bc4306d43f"} Jan 27 18:17:45 crc kubenswrapper[4907]: I0127 18:17:45.702218 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:45 crc kubenswrapper[4907]: I0127 18:17:45.739450 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" podStartSLOduration=7.739427181 podStartE2EDuration="7.739427181s" podCreationTimestamp="2026-01-27 18:17:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:17:45.733334924 +0000 UTC m=+720.862617536" watchObservedRunningTime="2026-01-27 18:17:45.739427181 +0000 UTC m=+720.868709793" Jan 27 18:17:45 crc kubenswrapper[4907]: I0127 18:17:45.761101 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:46 crc kubenswrapper[4907]: I0127 18:17:46.714678 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:46 crc kubenswrapper[4907]: I0127 18:17:46.715135 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:46 crc kubenswrapper[4907]: I0127 18:17:46.807134 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:17:46 crc kubenswrapper[4907]: I0127 18:17:46.956249 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.774980 4907 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["cert-manager/cert-manager-cainjector-cf98fcc89-58hmb"] Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.776100 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.783523 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.783681 4907 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-vvz89" Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.783829 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.794467 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-58hmb"] Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.834617 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-jslkq"] Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.835901 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.841261 4907 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-dghd4" Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.843189 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-jslkq"] Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.930289 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-jfhbt"] Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.931859 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.933637 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvjc8\" (UniqueName: \"kubernetes.io/projected/1fa35228-e301-48b5-b17b-21694e61ef16-kube-api-access-hvjc8\") pod \"cert-manager-858654f9db-jslkq\" (UID: \"1fa35228-e301-48b5-b17b-21694e61ef16\") " pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.933746 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kq5w\" (UniqueName: \"kubernetes.io/projected/19be711f-36d9-46ae-8f7a-fdba490484da-kube-api-access-9kq5w\") pod \"cert-manager-cainjector-cf98fcc89-58hmb\" (UID: \"19be711f-36d9-46ae-8f7a-fdba490484da\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.956409 4907 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-l9jkf" Jan 27 18:17:47 crc kubenswrapper[4907]: I0127 18:17:47.964355 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-jfhbt"] Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.036335 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvjc8\" (UniqueName: \"kubernetes.io/projected/1fa35228-e301-48b5-b17b-21694e61ef16-kube-api-access-hvjc8\") pod \"cert-manager-858654f9db-jslkq\" (UID: \"1fa35228-e301-48b5-b17b-21694e61ef16\") " pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.036474 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kq5w\" (UniqueName: \"kubernetes.io/projected/19be711f-36d9-46ae-8f7a-fdba490484da-kube-api-access-9kq5w\") pod 
\"cert-manager-cainjector-cf98fcc89-58hmb\" (UID: \"19be711f-36d9-46ae-8f7a-fdba490484da\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.036521 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrzvw\" (UniqueName: \"kubernetes.io/projected/53565dd2-5a29-4ba0-9654-36b9600f765b-kube-api-access-hrzvw\") pod \"cert-manager-webhook-687f57d79b-jfhbt\" (UID: \"53565dd2-5a29-4ba0-9654-36b9600f765b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.063985 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kq5w\" (UniqueName: \"kubernetes.io/projected/19be711f-36d9-46ae-8f7a-fdba490484da-kube-api-access-9kq5w\") pod \"cert-manager-cainjector-cf98fcc89-58hmb\" (UID: \"19be711f-36d9-46ae-8f7a-fdba490484da\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.081579 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvjc8\" (UniqueName: \"kubernetes.io/projected/1fa35228-e301-48b5-b17b-21694e61ef16-kube-api-access-hvjc8\") pod \"cert-manager-858654f9db-jslkq\" (UID: \"1fa35228-e301-48b5-b17b-21694e61ef16\") " pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.116865 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.140120 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrzvw\" (UniqueName: \"kubernetes.io/projected/53565dd2-5a29-4ba0-9654-36b9600f765b-kube-api-access-hrzvw\") pod \"cert-manager-webhook-687f57d79b-jfhbt\" (UID: \"53565dd2-5a29-4ba0-9654-36b9600f765b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.152279 4907 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-58hmb_cert-manager_19be711f-36d9-46ae-8f7a-fdba490484da_0(c04677d8c9df0888ca46c517e2764ba3f07e8ae894a30fed5b3708094f869180): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.152363 4907 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-58hmb_cert-manager_19be711f-36d9-46ae-8f7a-fdba490484da_0(c04677d8c9df0888ca46c517e2764ba3f07e8ae894a30fed5b3708094f869180): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.152397 4907 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-58hmb_cert-manager_19be711f-36d9-46ae-8f7a-fdba490484da_0(c04677d8c9df0888ca46c517e2764ba3f07e8ae894a30fed5b3708094f869180): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.152450 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-58hmb_cert-manager(19be711f-36d9-46ae-8f7a-fdba490484da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-58hmb_cert-manager(19be711f-36d9-46ae-8f7a-fdba490484da)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-58hmb_cert-manager_19be711f-36d9-46ae-8f7a-fdba490484da_0(c04677d8c9df0888ca46c517e2764ba3f07e8ae894a30fed5b3708094f869180): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" podUID="19be711f-36d9-46ae-8f7a-fdba490484da" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.164027 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrzvw\" (UniqueName: \"kubernetes.io/projected/53565dd2-5a29-4ba0-9654-36b9600f765b-kube-api-access-hrzvw\") pod \"cert-manager-webhook-687f57d79b-jfhbt\" (UID: \"53565dd2-5a29-4ba0-9654-36b9600f765b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.197387 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.235877 4907 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(c0a59395c202f2c606786249a4e0af2fcb46fce2b908f2fe777e5897574d6e00): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.235991 4907 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(c0a59395c202f2c606786249a4e0af2fcb46fce2b908f2fe777e5897574d6e00): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.236017 4907 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(c0a59395c202f2c606786249a4e0af2fcb46fce2b908f2fe777e5897574d6e00): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.236078 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-jslkq_cert-manager(1fa35228-e301-48b5-b17b-21694e61ef16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-jslkq_cert-manager(1fa35228-e301-48b5-b17b-21694e61ef16)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(c0a59395c202f2c606786249a4e0af2fcb46fce2b908f2fe777e5897574d6e00): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-jslkq" podUID="1fa35228-e301-48b5-b17b-21694e61ef16" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.280341 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.299311 4907 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-jfhbt_cert-manager_53565dd2-5a29-4ba0-9654-36b9600f765b_0(8f951fdbf6ec980d4798a3115b6cb29985a372d421e325ec4ae9c0d2bade4977): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.299386 4907 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-jfhbt_cert-manager_53565dd2-5a29-4ba0-9654-36b9600f765b_0(8f951fdbf6ec980d4798a3115b6cb29985a372d421e325ec4ae9c0d2bade4977): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.299414 4907 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-jfhbt_cert-manager_53565dd2-5a29-4ba0-9654-36b9600f765b_0(8f951fdbf6ec980d4798a3115b6cb29985a372d421e325ec4ae9c0d2bade4977): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.299468 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-jfhbt_cert-manager(53565dd2-5a29-4ba0-9654-36b9600f765b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-jfhbt_cert-manager(53565dd2-5a29-4ba0-9654-36b9600f765b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-jfhbt_cert-manager_53565dd2-5a29-4ba0-9654-36b9600f765b_0(8f951fdbf6ec980d4798a3115b6cb29985a372d421e325ec4ae9c0d2bade4977): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" podUID="53565dd2-5a29-4ba0-9654-36b9600f765b" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.725173 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.725216 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.725217 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.726213 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.726214 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:17:48 crc kubenswrapper[4907]: I0127 18:17:48.726325 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.775126 4907 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-jfhbt_cert-manager_53565dd2-5a29-4ba0-9654-36b9600f765b_0(6ac74bee930f68148680ac07734c051bda9ae7fe889f196ba52d77463f5ee28d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.775197 4907 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-jfhbt_cert-manager_53565dd2-5a29-4ba0-9654-36b9600f765b_0(6ac74bee930f68148680ac07734c051bda9ae7fe889f196ba52d77463f5ee28d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.775220 4907 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-jfhbt_cert-manager_53565dd2-5a29-4ba0-9654-36b9600f765b_0(6ac74bee930f68148680ac07734c051bda9ae7fe889f196ba52d77463f5ee28d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.775286 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-jfhbt_cert-manager(53565dd2-5a29-4ba0-9654-36b9600f765b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-jfhbt_cert-manager(53565dd2-5a29-4ba0-9654-36b9600f765b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-jfhbt_cert-manager_53565dd2-5a29-4ba0-9654-36b9600f765b_0(6ac74bee930f68148680ac07734c051bda9ae7fe889f196ba52d77463f5ee28d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" podUID="53565dd2-5a29-4ba0-9654-36b9600f765b" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.786287 4907 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-58hmb_cert-manager_19be711f-36d9-46ae-8f7a-fdba490484da_0(b3e2e5ab248ae966de38bb8a6baffba58b3cd04d7fd85a449db83171080bce98): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.786360 4907 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-58hmb_cert-manager_19be711f-36d9-46ae-8f7a-fdba490484da_0(b3e2e5ab248ae966de38bb8a6baffba58b3cd04d7fd85a449db83171080bce98): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.786382 4907 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-58hmb_cert-manager_19be711f-36d9-46ae-8f7a-fdba490484da_0(b3e2e5ab248ae966de38bb8a6baffba58b3cd04d7fd85a449db83171080bce98): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.786436 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-58hmb_cert-manager(19be711f-36d9-46ae-8f7a-fdba490484da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-58hmb_cert-manager(19be711f-36d9-46ae-8f7a-fdba490484da)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-58hmb_cert-manager_19be711f-36d9-46ae-8f7a-fdba490484da_0(b3e2e5ab248ae966de38bb8a6baffba58b3cd04d7fd85a449db83171080bce98): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" podUID="19be711f-36d9-46ae-8f7a-fdba490484da" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.792479 4907 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(3bef799e341ad156cb01013ffcde5bb0e25dae62914c3fa137ed8075d303f42f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.792585 4907 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(3bef799e341ad156cb01013ffcde5bb0e25dae62914c3fa137ed8075d303f42f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.792615 4907 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(3bef799e341ad156cb01013ffcde5bb0e25dae62914c3fa137ed8075d303f42f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:48 crc kubenswrapper[4907]: E0127 18:17:48.792673 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-jslkq_cert-manager(1fa35228-e301-48b5-b17b-21694e61ef16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-jslkq_cert-manager(1fa35228-e301-48b5-b17b-21694e61ef16)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(3bef799e341ad156cb01013ffcde5bb0e25dae62914c3fa137ed8075d303f42f): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-jslkq" podUID="1fa35228-e301-48b5-b17b-21694e61ef16" Jan 27 18:17:56 crc kubenswrapper[4907]: I0127 18:17:56.520947 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:17:56 crc kubenswrapper[4907]: I0127 18:17:56.521955 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:17:58 crc kubenswrapper[4907]: I0127 18:17:58.748194 4907 scope.go:117] "RemoveContainer" containerID="14b5e052edc9d584f105f6f14c22e4f3698d1e6bed62b8389665cf51f59b54b4" Jan 27 18:17:59 crc kubenswrapper[4907]: I0127 18:17:59.487330 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fgtpz_985b7738-a27c-4276-8160-c2baa64ab7f6/kube-multus/2.log" Jan 27 18:17:59 crc kubenswrapper[4907]: I0127 18:17:59.487393 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fgtpz" event={"ID":"985b7738-a27c-4276-8160-c2baa64ab7f6","Type":"ContainerStarted","Data":"879127d9d3b1234efa50f4870cc4817f5577a6e333c6d3116383007bc83a5960"} Jan 27 18:17:59 crc kubenswrapper[4907]: I0127 18:17:59.747506 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:59 crc kubenswrapper[4907]: I0127 18:17:59.748531 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:59 crc kubenswrapper[4907]: E0127 18:17:59.789380 4907 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(a6752db2394ffb8252e624de2e2924fcb04f7aa89da336369d059ebf7ae1645b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:17:59 crc kubenswrapper[4907]: E0127 18:17:59.789470 4907 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(a6752db2394ffb8252e624de2e2924fcb04f7aa89da336369d059ebf7ae1645b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:59 crc kubenswrapper[4907]: E0127 18:17:59.789500 4907 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(a6752db2394ffb8252e624de2e2924fcb04f7aa89da336369d059ebf7ae1645b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:17:59 crc kubenswrapper[4907]: E0127 18:17:59.789582 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-jslkq_cert-manager(1fa35228-e301-48b5-b17b-21694e61ef16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-jslkq_cert-manager(1fa35228-e301-48b5-b17b-21694e61ef16)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-jslkq_cert-manager_1fa35228-e301-48b5-b17b-21694e61ef16_0(a6752db2394ffb8252e624de2e2924fcb04f7aa89da336369d059ebf7ae1645b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-jslkq" podUID="1fa35228-e301-48b5-b17b-21694e61ef16" Jan 27 18:18:01 crc kubenswrapper[4907]: I0127 18:18:01.747237 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:18:01 crc kubenswrapper[4907]: I0127 18:18:01.748505 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" Jan 27 18:18:02 crc kubenswrapper[4907]: I0127 18:18:02.198881 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-58hmb"] Jan 27 18:18:02 crc kubenswrapper[4907]: I0127 18:18:02.509160 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" event={"ID":"19be711f-36d9-46ae-8f7a-fdba490484da","Type":"ContainerStarted","Data":"19d2d180bea7a3b89fbb0f4692d5b4240257cbe3f322e02ee1d4daba3774b7ba"} Jan 27 18:18:02 crc kubenswrapper[4907]: I0127 18:18:02.748013 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:18:02 crc kubenswrapper[4907]: I0127 18:18:02.748738 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:18:02 crc kubenswrapper[4907]: I0127 18:18:02.976205 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-jfhbt"] Jan 27 18:18:02 crc kubenswrapper[4907]: W0127 18:18:02.981084 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53565dd2_5a29_4ba0_9654_36b9600f765b.slice/crio-956ad6068715b850c189ad9f324d016fb1cb56dfe0cfbb39d44f2c33a25cdc3e WatchSource:0}: Error finding container 956ad6068715b850c189ad9f324d016fb1cb56dfe0cfbb39d44f2c33a25cdc3e: Status 404 returned error can't find the container with id 956ad6068715b850c189ad9f324d016fb1cb56dfe0cfbb39d44f2c33a25cdc3e Jan 27 18:18:03 crc kubenswrapper[4907]: I0127 18:18:03.518282 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" event={"ID":"53565dd2-5a29-4ba0-9654-36b9600f765b","Type":"ContainerStarted","Data":"956ad6068715b850c189ad9f324d016fb1cb56dfe0cfbb39d44f2c33a25cdc3e"} Jan 27 18:18:05 crc kubenswrapper[4907]: I0127 18:18:05.533753 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" event={"ID":"19be711f-36d9-46ae-8f7a-fdba490484da","Type":"ContainerStarted","Data":"b9092f590d1e086b8b02772319d398612206e4925a11ceacbe72157f2c8dd81f"} Jan 27 18:18:05 crc kubenswrapper[4907]: I0127 18:18:05.562033 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-58hmb" podStartSLOduration=16.056831649 podStartE2EDuration="18.5620105s" podCreationTimestamp="2026-01-27 18:17:47 +0000 UTC" firstStartedPulling="2026-01-27 
18:18:02.207362034 +0000 UTC m=+737.336644656" lastFinishedPulling="2026-01-27 18:18:04.712540855 +0000 UTC m=+739.841823507" observedRunningTime="2026-01-27 18:18:05.555251845 +0000 UTC m=+740.684534467" watchObservedRunningTime="2026-01-27 18:18:05.5620105 +0000 UTC m=+740.691293112" Jan 27 18:18:07 crc kubenswrapper[4907]: I0127 18:18:07.558747 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" event={"ID":"53565dd2-5a29-4ba0-9654-36b9600f765b","Type":"ContainerStarted","Data":"16eb4e3e04684bc396f8b415958a6dfeff3981eeff07496a006170e7acbc673f"} Jan 27 18:18:07 crc kubenswrapper[4907]: I0127 18:18:07.559297 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:18:07 crc kubenswrapper[4907]: I0127 18:18:07.592819 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" podStartSLOduration=17.197585527 podStartE2EDuration="20.592788919s" podCreationTimestamp="2026-01-27 18:17:47 +0000 UTC" firstStartedPulling="2026-01-27 18:18:02.983792231 +0000 UTC m=+738.113074843" lastFinishedPulling="2026-01-27 18:18:06.378995603 +0000 UTC m=+741.508278235" observedRunningTime="2026-01-27 18:18:07.588921967 +0000 UTC m=+742.718204639" watchObservedRunningTime="2026-01-27 18:18:07.592788919 +0000 UTC m=+742.722071541" Jan 27 18:18:08 crc kubenswrapper[4907]: I0127 18:18:08.566414 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" Jan 27 18:18:13 crc kubenswrapper[4907]: I0127 18:18:13.283532 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" Jan 27 18:18:13 crc kubenswrapper[4907]: I0127 18:18:13.748289 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:18:13 crc kubenswrapper[4907]: I0127 18:18:13.749511 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jslkq" Jan 27 18:18:13 crc kubenswrapper[4907]: I0127 18:18:13.994265 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-jslkq"] Jan 27 18:18:14 crc kubenswrapper[4907]: W0127 18:18:14.005289 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fa35228_e301_48b5_b17b_21694e61ef16.slice/crio-f0828181fd22e037e705c9f01d892bbb35b85e66073ce15adc7fab16a9f1cf6c WatchSource:0}: Error finding container f0828181fd22e037e705c9f01d892bbb35b85e66073ce15adc7fab16a9f1cf6c: Status 404 returned error can't find the container with id f0828181fd22e037e705c9f01d892bbb35b85e66073ce15adc7fab16a9f1cf6c Jan 27 18:18:14 crc kubenswrapper[4907]: I0127 18:18:14.611934 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-jslkq" event={"ID":"1fa35228-e301-48b5-b17b-21694e61ef16","Type":"ContainerStarted","Data":"f0828181fd22e037e705c9f01d892bbb35b85e66073ce15adc7fab16a9f1cf6c"} Jan 27 18:18:16 crc kubenswrapper[4907]: I0127 18:18:16.630767 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-jslkq" event={"ID":"1fa35228-e301-48b5-b17b-21694e61ef16","Type":"ContainerStarted","Data":"aed37ffb5087a21e94cf614aab30edd147b7da31c8cc8cad9d7fb6626440d998"} Jan 27 18:18:16 crc kubenswrapper[4907]: I0127 18:18:16.649761 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-jslkq" podStartSLOduration=28.004012081 podStartE2EDuration="29.649739058s" podCreationTimestamp="2026-01-27 18:17:47 +0000 UTC" firstStartedPulling="2026-01-27 18:18:14.008951996 +0000 UTC m=+749.138234608" 
lastFinishedPulling="2026-01-27 18:18:15.654678983 +0000 UTC m=+750.783961585" observedRunningTime="2026-01-27 18:18:16.647872844 +0000 UTC m=+751.777155506" watchObservedRunningTime="2026-01-27 18:18:16.649739058 +0000 UTC m=+751.779021680" Jan 27 18:18:20 crc kubenswrapper[4907]: I0127 18:18:20.775443 4907 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 18:18:26 crc kubenswrapper[4907]: I0127 18:18:26.521702 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:18:26 crc kubenswrapper[4907]: I0127 18:18:26.522713 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:18:41 crc kubenswrapper[4907]: I0127 18:18:41.795223 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt"] Jan 27 18:18:41 crc kubenswrapper[4907]: I0127 18:18:41.797897 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" Jan 27 18:18:41 crc kubenswrapper[4907]: I0127 18:18:41.800108 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 18:18:41 crc kubenswrapper[4907]: I0127 18:18:41.809044 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt"] Jan 27 18:18:41 crc kubenswrapper[4907]: I0127 18:18:41.914054 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" Jan 27 18:18:41 crc kubenswrapper[4907]: I0127 18:18:41.914154 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbt6l\" (UniqueName: \"kubernetes.io/projected/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-kube-api-access-hbt6l\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" Jan 27 18:18:41 crc kubenswrapper[4907]: I0127 18:18:41.914254 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" Jan 27 18:18:42 crc kubenswrapper[4907]: 
I0127 18:18:42.015676 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbt6l\" (UniqueName: \"kubernetes.io/projected/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-kube-api-access-hbt6l\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.015813 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.015902 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.016410 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.016492 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt"
Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.048576 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbt6l\" (UniqueName: \"kubernetes.io/projected/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-kube-api-access-hbt6l\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt"
Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.118689 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt"
Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.220328 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn"]
Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.222128 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn"
Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.230495 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn"]
Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.400303 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt"]
Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.422777 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn"
Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.423100 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krpcj\" (UniqueName: \"kubernetes.io/projected/7584cc55-f71d-485d-aca5-31f66746f17a-kube-api-access-krpcj\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn"
Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.423721 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn"
Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.525612 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krpcj\" (UniqueName: \"kubernetes.io/projected/7584cc55-f71d-485d-aca5-31f66746f17a-kube-api-access-krpcj\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn"
Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.526167 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn"
Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.526234 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn"
Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.526832 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn"
Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.526921 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn"
Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.551513 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krpcj\" (UniqueName: \"kubernetes.io/projected/7584cc55-f71d-485d-aca5-31f66746f17a-kube-api-access-krpcj\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn"
Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.847177 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn"
Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.861252 4907 generic.go:334] "Generic (PLEG): container finished" podID="3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" containerID="c693cb97ad2ed5d0953eb99d80b718db43fb9e5c380da9ee76f90c687dfa7c0c" exitCode=0
Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.861319 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" event={"ID":"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68","Type":"ContainerDied","Data":"c693cb97ad2ed5d0953eb99d80b718db43fb9e5c380da9ee76f90c687dfa7c0c"}
Jan 27 18:18:42 crc kubenswrapper[4907]: I0127 18:18:42.861363 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" event={"ID":"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68","Type":"ContainerStarted","Data":"dd63713c46a940f4eaa87e1d3a4f01883be1ae7da3ba53929efeceadbbcc153d"}
Jan 27 18:18:43 crc kubenswrapper[4907]: I0127 18:18:43.168797 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn"]
Jan 27 18:18:43 crc kubenswrapper[4907]: I0127 18:18:43.878396 4907 generic.go:334] "Generic (PLEG): container finished" podID="7584cc55-f71d-485d-aca5-31f66746f17a" containerID="5044a562f55391d794a14dae2b2518e06bf14ea180e2d8482fc61eae6edca11a" exitCode=0
Jan 27 18:18:43 crc kubenswrapper[4907]: I0127 18:18:43.878461 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" event={"ID":"7584cc55-f71d-485d-aca5-31f66746f17a","Type":"ContainerDied","Data":"5044a562f55391d794a14dae2b2518e06bf14ea180e2d8482fc61eae6edca11a"}
Jan 27 18:18:43 crc kubenswrapper[4907]: I0127 18:18:43.878499 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" event={"ID":"7584cc55-f71d-485d-aca5-31f66746f17a","Type":"ContainerStarted","Data":"e4559db33415e96708fad227340b9a44559451330b997036960b12b2ced113eb"}
Jan 27 18:18:44 crc kubenswrapper[4907]: I0127 18:18:44.888691 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" event={"ID":"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68","Type":"ContainerDied","Data":"9d06897a8f80857ce60249f6df99fd6e0b16d7b140318e9d9cbfd00c6757b05f"}
Jan 27 18:18:44 crc kubenswrapper[4907]: I0127 18:18:44.888643 4907 generic.go:334] "Generic (PLEG): container finished" podID="3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" containerID="9d06897a8f80857ce60249f6df99fd6e0b16d7b140318e9d9cbfd00c6757b05f" exitCode=0
Jan 27 18:18:45 crc kubenswrapper[4907]: E0127 18:18:45.343685 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d9834e6_1e3d_42b3_90bf_204c9fa7bb68.slice/crio-607467756dd8c7aeb72aec5ba2a2f61e97d4c61377a32e334b47f96ed2e348ab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d9834e6_1e3d_42b3_90bf_204c9fa7bb68.slice/crio-conmon-607467756dd8c7aeb72aec5ba2a2f61e97d4c61377a32e334b47f96ed2e348ab.scope\": RecentStats: unable to find data in memory cache]"
Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.546409 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sbcj8"]
Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.548855 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sbcj8"
Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.558893 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sbcj8"]
Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.586688 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-utilities\") pod \"redhat-operators-sbcj8\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") " pod="openshift-marketplace/redhat-operators-sbcj8"
Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.587098 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87sfs\" (UniqueName: \"kubernetes.io/projected/232ce760-7804-4270-9073-256444e355ea-kube-api-access-87sfs\") pod \"redhat-operators-sbcj8\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") " pod="openshift-marketplace/redhat-operators-sbcj8"
Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.587205 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-catalog-content\") pod \"redhat-operators-sbcj8\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") " pod="openshift-marketplace/redhat-operators-sbcj8"
Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.687887 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-utilities\") pod \"redhat-operators-sbcj8\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") " pod="openshift-marketplace/redhat-operators-sbcj8"
Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.688275 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87sfs\" (UniqueName: \"kubernetes.io/projected/232ce760-7804-4270-9073-256444e355ea-kube-api-access-87sfs\") pod \"redhat-operators-sbcj8\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") " pod="openshift-marketplace/redhat-operators-sbcj8"
Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.688427 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-catalog-content\") pod \"redhat-operators-sbcj8\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") " pod="openshift-marketplace/redhat-operators-sbcj8"
Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.688491 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-utilities\") pod \"redhat-operators-sbcj8\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") " pod="openshift-marketplace/redhat-operators-sbcj8"
Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.689165 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-catalog-content\") pod \"redhat-operators-sbcj8\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") " pod="openshift-marketplace/redhat-operators-sbcj8"
Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.719661 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87sfs\" (UniqueName: \"kubernetes.io/projected/232ce760-7804-4270-9073-256444e355ea-kube-api-access-87sfs\") pod \"redhat-operators-sbcj8\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") " pod="openshift-marketplace/redhat-operators-sbcj8"
Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.866805 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sbcj8"
Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.901155 4907 generic.go:334] "Generic (PLEG): container finished" podID="7584cc55-f71d-485d-aca5-31f66746f17a" containerID="43222a645e1281b4851be0fc1194347ebc97a3396243980ab97b51d2ee170f7a" exitCode=0
Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.901270 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" event={"ID":"7584cc55-f71d-485d-aca5-31f66746f17a","Type":"ContainerDied","Data":"43222a645e1281b4851be0fc1194347ebc97a3396243980ab97b51d2ee170f7a"}
Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.912019 4907 generic.go:334] "Generic (PLEG): container finished" podID="3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" containerID="607467756dd8c7aeb72aec5ba2a2f61e97d4c61377a32e334b47f96ed2e348ab" exitCode=0
Jan 27 18:18:45 crc kubenswrapper[4907]: I0127 18:18:45.912077 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" event={"ID":"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68","Type":"ContainerDied","Data":"607467756dd8c7aeb72aec5ba2a2f61e97d4c61377a32e334b47f96ed2e348ab"}
Jan 27 18:18:46 crc kubenswrapper[4907]: I0127 18:18:46.162198 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sbcj8"]
Jan 27 18:18:46 crc kubenswrapper[4907]: W0127 18:18:46.170498 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod232ce760_7804_4270_9073_256444e355ea.slice/crio-b9ad4bce1e8da4c10297f727ad5f50511fc4e0f15db9fefcdc1bfdc966d4f95e WatchSource:0}: Error finding container b9ad4bce1e8da4c10297f727ad5f50511fc4e0f15db9fefcdc1bfdc966d4f95e: Status 404 returned error can't find the container with id b9ad4bce1e8da4c10297f727ad5f50511fc4e0f15db9fefcdc1bfdc966d4f95e
Jan 27 18:18:46 crc kubenswrapper[4907]: I0127 18:18:46.921891 4907 generic.go:334] "Generic (PLEG): container finished" podID="232ce760-7804-4270-9073-256444e355ea" containerID="8b0926806961ab50c9c488a6c1db2844dd920a9cd9b6b113a061e53caf3bf66c" exitCode=0
Jan 27 18:18:46 crc kubenswrapper[4907]: I0127 18:18:46.922058 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbcj8" event={"ID":"232ce760-7804-4270-9073-256444e355ea","Type":"ContainerDied","Data":"8b0926806961ab50c9c488a6c1db2844dd920a9cd9b6b113a061e53caf3bf66c"}
Jan 27 18:18:46 crc kubenswrapper[4907]: I0127 18:18:46.922134 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbcj8" event={"ID":"232ce760-7804-4270-9073-256444e355ea","Type":"ContainerStarted","Data":"b9ad4bce1e8da4c10297f727ad5f50511fc4e0f15db9fefcdc1bfdc966d4f95e"}
Jan 27 18:18:46 crc kubenswrapper[4907]: I0127 18:18:46.924880 4907 generic.go:334] "Generic (PLEG): container finished" podID="7584cc55-f71d-485d-aca5-31f66746f17a" containerID="67a75734d3f67f5024a57878fe87d33c694c4e4e09c46e83f70fa879a9c5dfb1" exitCode=0
Jan 27 18:18:46 crc kubenswrapper[4907]: I0127 18:18:46.925041 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" event={"ID":"7584cc55-f71d-485d-aca5-31f66746f17a","Type":"ContainerDied","Data":"67a75734d3f67f5024a57878fe87d33c694c4e4e09c46e83f70fa879a9c5dfb1"}
Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.211575 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt"
Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.311532 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbt6l\" (UniqueName: \"kubernetes.io/projected/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-kube-api-access-hbt6l\") pod \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") "
Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.311667 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-util\") pod \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") "
Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.311717 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-bundle\") pod \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\" (UID: \"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68\") "
Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.312779 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-bundle" (OuterVolumeSpecName: "bundle") pod "3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" (UID: "3d9834e6-1e3d-42b3-90bf-204c9fa7bb68"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.323786 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-kube-api-access-hbt6l" (OuterVolumeSpecName: "kube-api-access-hbt6l") pod "3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" (UID: "3d9834e6-1e3d-42b3-90bf-204c9fa7bb68"). InnerVolumeSpecName "kube-api-access-hbt6l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.333301 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-util" (OuterVolumeSpecName: "util") pod "3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" (UID: "3d9834e6-1e3d-42b3-90bf-204c9fa7bb68"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.413368 4907 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-util\") on node \"crc\" DevicePath \"\""
Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.413412 4907 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.413426 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbt6l\" (UniqueName: \"kubernetes.io/projected/3d9834e6-1e3d-42b3-90bf-204c9fa7bb68-kube-api-access-hbt6l\") on node \"crc\" DevicePath \"\""
Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.935075 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbcj8" event={"ID":"232ce760-7804-4270-9073-256444e355ea","Type":"ContainerStarted","Data":"0b3c9f14c11bb96a76c53addbf69badde303d045a746863657c2c06a615d8850"}
Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.939383 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt" event={"ID":"3d9834e6-1e3d-42b3-90bf-204c9fa7bb68","Type":"ContainerDied","Data":"dd63713c46a940f4eaa87e1d3a4f01883be1ae7da3ba53929efeceadbbcc153d"}
Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.939450 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd63713c46a940f4eaa87e1d3a4f01883be1ae7da3ba53929efeceadbbcc153d"
Jan 27 18:18:47 crc kubenswrapper[4907]: I0127 18:18:47.939472 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt"
Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.216428 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn"
Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.329945 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-bundle\") pod \"7584cc55-f71d-485d-aca5-31f66746f17a\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") "
Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.330170 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krpcj\" (UniqueName: \"kubernetes.io/projected/7584cc55-f71d-485d-aca5-31f66746f17a-kube-api-access-krpcj\") pod \"7584cc55-f71d-485d-aca5-31f66746f17a\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") "
Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.330196 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-util\") pod \"7584cc55-f71d-485d-aca5-31f66746f17a\" (UID: \"7584cc55-f71d-485d-aca5-31f66746f17a\") "
Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.331202 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-bundle" (OuterVolumeSpecName: "bundle") pod "7584cc55-f71d-485d-aca5-31f66746f17a" (UID: "7584cc55-f71d-485d-aca5-31f66746f17a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.336062 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7584cc55-f71d-485d-aca5-31f66746f17a-kube-api-access-krpcj" (OuterVolumeSpecName: "kube-api-access-krpcj") pod "7584cc55-f71d-485d-aca5-31f66746f17a" (UID: "7584cc55-f71d-485d-aca5-31f66746f17a"). InnerVolumeSpecName "kube-api-access-krpcj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.345642 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-util" (OuterVolumeSpecName: "util") pod "7584cc55-f71d-485d-aca5-31f66746f17a" (UID: "7584cc55-f71d-485d-aca5-31f66746f17a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.432455 4907 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.432502 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krpcj\" (UniqueName: \"kubernetes.io/projected/7584cc55-f71d-485d-aca5-31f66746f17a-kube-api-access-krpcj\") on node \"crc\" DevicePath \"\""
Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.432519 4907 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7584cc55-f71d-485d-aca5-31f66746f17a-util\") on node \"crc\" DevicePath \"\""
Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.948119 4907 generic.go:334] "Generic (PLEG): container finished" podID="232ce760-7804-4270-9073-256444e355ea" containerID="0b3c9f14c11bb96a76c53addbf69badde303d045a746863657c2c06a615d8850" exitCode=0
Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.948210 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbcj8" event={"ID":"232ce760-7804-4270-9073-256444e355ea","Type":"ContainerDied","Data":"0b3c9f14c11bb96a76c53addbf69badde303d045a746863657c2c06a615d8850"}
Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.952229 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn" event={"ID":"7584cc55-f71d-485d-aca5-31f66746f17a","Type":"ContainerDied","Data":"e4559db33415e96708fad227340b9a44559451330b997036960b12b2ced113eb"}
Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.952307 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4559db33415e96708fad227340b9a44559451330b997036960b12b2ced113eb"
Jan 27 18:18:48 crc kubenswrapper[4907]: I0127 18:18:48.952264 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn"
Jan 27 18:18:49 crc kubenswrapper[4907]: I0127 18:18:49.962463 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbcj8" event={"ID":"232ce760-7804-4270-9073-256444e355ea","Type":"ContainerStarted","Data":"262c0d69b829729aa0e4d411b99f2b2977e52e968241e1da1f74a8341db6f027"}
Jan 27 18:18:49 crc kubenswrapper[4907]: I0127 18:18:49.984877 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sbcj8" podStartSLOduration=2.2095076909999998 podStartE2EDuration="4.984849885s" podCreationTimestamp="2026-01-27 18:18:45 +0000 UTC" firstStartedPulling="2026-01-27 18:18:46.924444882 +0000 UTC m=+782.053727494" lastFinishedPulling="2026-01-27 18:18:49.699787076 +0000 UTC m=+784.829069688" observedRunningTime="2026-01-27 18:18:49.984270768 +0000 UTC m=+785.113553410" watchObservedRunningTime="2026-01-27 18:18:49.984849885 +0000 UTC m=+785.114132537"
Jan 27 18:18:55 crc kubenswrapper[4907]: I0127 18:18:55.867334 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sbcj8"
Jan 27 18:18:55 crc kubenswrapper[4907]: I0127 18:18:55.868152 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sbcj8"
Jan 27 18:18:56 crc kubenswrapper[4907]: I0127 18:18:56.521647 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 18:18:56 crc kubenswrapper[4907]: I0127 18:18:56.522436 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 18:18:56 crc kubenswrapper[4907]: I0127 18:18:56.522620 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh"
Jan 27 18:18:56 crc kubenswrapper[4907]: I0127 18:18:56.523627 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6099244ea1b816357fdc0578901eb21429999a3cda00a97382e3e7b69c0e3a0f"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 18:18:56 crc kubenswrapper[4907]: I0127 18:18:56.523850 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://6099244ea1b816357fdc0578901eb21429999a3cda00a97382e3e7b69c0e3a0f" gracePeriod=600
Jan 27 18:18:56 crc kubenswrapper[4907]: I0127 18:18:56.918665 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sbcj8" podUID="232ce760-7804-4270-9073-256444e355ea" containerName="registry-server" probeResult="failure" output=<
Jan 27 18:18:56 crc kubenswrapper[4907]: 	timeout: failed to connect service ":50051" within 1s
Jan 27 18:18:56 crc kubenswrapper[4907]: >
Jan 27 18:18:57 crc kubenswrapper[4907]: I0127 18:18:57.015267 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="6099244ea1b816357fdc0578901eb21429999a3cda00a97382e3e7b69c0e3a0f" exitCode=0
Jan 27 18:18:57 crc kubenswrapper[4907]: I0127 18:18:57.015342 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"6099244ea1b816357fdc0578901eb21429999a3cda00a97382e3e7b69c0e3a0f"}
Jan 27 18:18:57 crc kubenswrapper[4907]: I0127 18:18:57.015385 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"6c6457fcad0aadd72b623dd84842669e5ae8a7cd9babd90c21be3d1544aa1b2c"}
Jan 27 18:18:57 crc kubenswrapper[4907]: I0127 18:18:57.015409 4907 scope.go:117] "RemoveContainer" containerID="ab92d09fe428c3a9b4babe53db3cc7cade210df12baba2fd0ad23f40dd462ded"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.150957 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"]
Jan 27 18:18:59 crc kubenswrapper[4907]: E0127 18:18:59.151718 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7584cc55-f71d-485d-aca5-31f66746f17a" containerName="extract"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.151734 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7584cc55-f71d-485d-aca5-31f66746f17a" containerName="extract"
Jan 27 18:18:59 crc kubenswrapper[4907]: E0127 18:18:59.151758 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" containerName="util"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.151766 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" containerName="util"
Jan 27 18:18:59 crc kubenswrapper[4907]: E0127 18:18:59.151780 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" containerName="pull"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.151789 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" containerName="pull"
Jan 27 18:18:59 crc kubenswrapper[4907]: E0127 18:18:59.151801 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7584cc55-f71d-485d-aca5-31f66746f17a" containerName="util"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.151808 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7584cc55-f71d-485d-aca5-31f66746f17a" containerName="util"
Jan 27 18:18:59 crc kubenswrapper[4907]: E0127 18:18:59.151816 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7584cc55-f71d-485d-aca5-31f66746f17a" containerName="pull"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.151822 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7584cc55-f71d-485d-aca5-31f66746f17a" containerName="pull"
Jan 27 18:18:59 crc kubenswrapper[4907]: E0127 18:18:59.151841 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" containerName="extract"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.151848 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" containerName="extract"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.152003 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d9834e6-1e3d-42b3-90bf-204c9fa7bb68" containerName="extract"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.152022 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="7584cc55-f71d-485d-aca5-31f66746f17a" containerName="extract"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.152822 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.156203 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.156248 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.156704 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.156907 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-r7dgl"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.157327 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.173065 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.194128 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"]
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.318027 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6347c63b-e1fb-4570-a350-68a9f9f1b79b-apiservice-cert\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.318087 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/6347c63b-e1fb-4570-a350-68a9f9f1b79b-manager-config\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.318115 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jphcl\" (UniqueName: \"kubernetes.io/projected/6347c63b-e1fb-4570-a350-68a9f9f1b79b-kube-api-access-jphcl\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.318418 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6347c63b-e1fb-4570-a350-68a9f9f1b79b-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.318705 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6347c63b-e1fb-4570-a350-68a9f9f1b79b-webhook-cert\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.420073 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6347c63b-e1fb-4570-a350-68a9f9f1b79b-webhook-cert\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.420540 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6347c63b-e1fb-4570-a350-68a9f9f1b79b-apiservice-cert\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.420604 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/6347c63b-e1fb-4570-a350-68a9f9f1b79b-manager-config\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"
Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.420627 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jphcl\" (UniqueName: \"kubernetes.io/projected/6347c63b-e1fb-4570-a350-68a9f9f1b79b-kube-api-access-jphcl\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") "
pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.420684 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6347c63b-e1fb-4570-a350-68a9f9f1b79b-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.423188 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/6347c63b-e1fb-4570-a350-68a9f9f1b79b-manager-config\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.435362 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6347c63b-e1fb-4570-a350-68a9f9f1b79b-apiservice-cert\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.443366 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6347c63b-e1fb-4570-a350-68a9f9f1b79b-webhook-cert\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.447858 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6347c63b-e1fb-4570-a350-68a9f9f1b79b-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.451977 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jphcl\" (UniqueName: \"kubernetes.io/projected/6347c63b-e1fb-4570-a350-68a9f9f1b79b-kube-api-access-jphcl\") pod \"loki-operator-controller-manager-7b8dfd4994-zw4xr\" (UID: \"6347c63b-e1fb-4570-a350-68a9f9f1b79b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.470191 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" Jan 27 18:18:59 crc kubenswrapper[4907]: I0127 18:18:59.920510 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr"] Jan 27 18:18:59 crc kubenswrapper[4907]: W0127 18:18:59.931258 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6347c63b_e1fb_4570_a350_68a9f9f1b79b.slice/crio-242b5cb99ad77a4bafdb06a623cd97043c0bc185166eda551f8635a1258bf4b5 WatchSource:0}: Error finding container 242b5cb99ad77a4bafdb06a623cd97043c0bc185166eda551f8635a1258bf4b5: Status 404 returned error can't find the container with id 242b5cb99ad77a4bafdb06a623cd97043c0bc185166eda551f8635a1258bf4b5 Jan 27 18:19:00 crc kubenswrapper[4907]: I0127 18:19:00.042674 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" 
event={"ID":"6347c63b-e1fb-4570-a350-68a9f9f1b79b","Type":"ContainerStarted","Data":"242b5cb99ad77a4bafdb06a623cd97043c0bc185166eda551f8635a1258bf4b5"} Jan 27 18:19:03 crc kubenswrapper[4907]: I0127 18:19:03.300816 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-t7bh6"] Jan 27 18:19:03 crc kubenswrapper[4907]: I0127 18:19:03.303077 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-t7bh6" Jan 27 18:19:03 crc kubenswrapper[4907]: I0127 18:19:03.310833 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Jan 27 18:19:03 crc kubenswrapper[4907]: I0127 18:19:03.311335 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-zlzvs" Jan 27 18:19:03 crc kubenswrapper[4907]: I0127 18:19:03.311547 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Jan 27 18:19:03 crc kubenswrapper[4907]: I0127 18:19:03.352256 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-t7bh6"] Jan 27 18:19:03 crc kubenswrapper[4907]: I0127 18:19:03.398536 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45czk\" (UniqueName: \"kubernetes.io/projected/1f119aff-6ff6-4393-b7d5-19a981e50f3c-kube-api-access-45czk\") pod \"cluster-logging-operator-79cf69ddc8-t7bh6\" (UID: \"1f119aff-6ff6-4393-b7d5-19a981e50f3c\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-t7bh6" Jan 27 18:19:03 crc kubenswrapper[4907]: I0127 18:19:03.500805 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45czk\" (UniqueName: \"kubernetes.io/projected/1f119aff-6ff6-4393-b7d5-19a981e50f3c-kube-api-access-45czk\") pod 
\"cluster-logging-operator-79cf69ddc8-t7bh6\" (UID: \"1f119aff-6ff6-4393-b7d5-19a981e50f3c\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-t7bh6" Jan 27 18:19:03 crc kubenswrapper[4907]: I0127 18:19:03.530431 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45czk\" (UniqueName: \"kubernetes.io/projected/1f119aff-6ff6-4393-b7d5-19a981e50f3c-kube-api-access-45czk\") pod \"cluster-logging-operator-79cf69ddc8-t7bh6\" (UID: \"1f119aff-6ff6-4393-b7d5-19a981e50f3c\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-t7bh6" Jan 27 18:19:03 crc kubenswrapper[4907]: I0127 18:19:03.633349 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-t7bh6" Jan 27 18:19:04 crc kubenswrapper[4907]: I0127 18:19:04.061215 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-t7bh6"] Jan 27 18:19:04 crc kubenswrapper[4907]: I0127 18:19:04.079451 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-t7bh6" event={"ID":"1f119aff-6ff6-4393-b7d5-19a981e50f3c","Type":"ContainerStarted","Data":"166a42432633595c547ac0c203b7f6d63cae1ab66ad2c3094983ba0c2555bdae"} Jan 27 18:19:05 crc kubenswrapper[4907]: I0127 18:19:05.919441 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sbcj8" Jan 27 18:19:05 crc kubenswrapper[4907]: I0127 18:19:05.995484 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sbcj8" Jan 27 18:19:08 crc kubenswrapper[4907]: I0127 18:19:08.756225 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sbcj8"] Jan 27 18:19:08 crc kubenswrapper[4907]: I0127 18:19:08.757380 4907 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-operators-sbcj8" podUID="232ce760-7804-4270-9073-256444e355ea" containerName="registry-server" containerID="cri-o://262c0d69b829729aa0e4d411b99f2b2977e52e968241e1da1f74a8341db6f027" gracePeriod=2 Jan 27 18:19:09 crc kubenswrapper[4907]: I0127 18:19:09.138806 4907 generic.go:334] "Generic (PLEG): container finished" podID="232ce760-7804-4270-9073-256444e355ea" containerID="262c0d69b829729aa0e4d411b99f2b2977e52e968241e1da1f74a8341db6f027" exitCode=0 Jan 27 18:19:09 crc kubenswrapper[4907]: I0127 18:19:09.138901 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbcj8" event={"ID":"232ce760-7804-4270-9073-256444e355ea","Type":"ContainerDied","Data":"262c0d69b829729aa0e4d411b99f2b2977e52e968241e1da1f74a8341db6f027"} Jan 27 18:19:10 crc kubenswrapper[4907]: I0127 18:19:10.924232 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sbcj8" Jan 27 18:19:10 crc kubenswrapper[4907]: I0127 18:19:10.961179 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-catalog-content\") pod \"232ce760-7804-4270-9073-256444e355ea\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") " Jan 27 18:19:10 crc kubenswrapper[4907]: I0127 18:19:10.961235 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-utilities\") pod \"232ce760-7804-4270-9073-256444e355ea\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") " Jan 27 18:19:10 crc kubenswrapper[4907]: I0127 18:19:10.961316 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87sfs\" (UniqueName: \"kubernetes.io/projected/232ce760-7804-4270-9073-256444e355ea-kube-api-access-87sfs\") pod 
\"232ce760-7804-4270-9073-256444e355ea\" (UID: \"232ce760-7804-4270-9073-256444e355ea\") " Jan 27 18:19:10 crc kubenswrapper[4907]: I0127 18:19:10.963111 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-utilities" (OuterVolumeSpecName: "utilities") pod "232ce760-7804-4270-9073-256444e355ea" (UID: "232ce760-7804-4270-9073-256444e355ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:19:10 crc kubenswrapper[4907]: I0127 18:19:10.976858 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/232ce760-7804-4270-9073-256444e355ea-kube-api-access-87sfs" (OuterVolumeSpecName: "kube-api-access-87sfs") pod "232ce760-7804-4270-9073-256444e355ea" (UID: "232ce760-7804-4270-9073-256444e355ea"). InnerVolumeSpecName "kube-api-access-87sfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.063768 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.064183 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87sfs\" (UniqueName: \"kubernetes.io/projected/232ce760-7804-4270-9073-256444e355ea-kube-api-access-87sfs\") on node \"crc\" DevicePath \"\"" Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.102376 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "232ce760-7804-4270-9073-256444e355ea" (UID: "232ce760-7804-4270-9073-256444e355ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.163308 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-t7bh6" event={"ID":"1f119aff-6ff6-4393-b7d5-19a981e50f3c","Type":"ContainerStarted","Data":"c983e9e44dd96374cfc020735027a4ddabc7d69698a655d7672ce298a3845248"} Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.165548 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/232ce760-7804-4270-9073-256444e355ea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.172367 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbcj8" event={"ID":"232ce760-7804-4270-9073-256444e355ea","Type":"ContainerDied","Data":"b9ad4bce1e8da4c10297f727ad5f50511fc4e0f15db9fefcdc1bfdc966d4f95e"} Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.172420 4907 scope.go:117] "RemoveContainer" containerID="262c0d69b829729aa0e4d411b99f2b2977e52e968241e1da1f74a8341db6f027" Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.173632 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sbcj8" Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.176641 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" event={"ID":"6347c63b-e1fb-4570-a350-68a9f9f1b79b","Type":"ContainerStarted","Data":"623c303552551027985f664f3b1be20727aa9bf35473c5e129c5ce18b1e755d0"} Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.206869 4907 scope.go:117] "RemoveContainer" containerID="0b3c9f14c11bb96a76c53addbf69badde303d045a746863657c2c06a615d8850" Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.232645 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-t7bh6" podStartSLOduration=1.6616893259999999 podStartE2EDuration="8.232625126s" podCreationTimestamp="2026-01-27 18:19:03 +0000 UTC" firstStartedPulling="2026-01-27 18:19:04.072709094 +0000 UTC m=+799.201991706" lastFinishedPulling="2026-01-27 18:19:10.643644904 +0000 UTC m=+805.772927506" observedRunningTime="2026-01-27 18:19:11.207043208 +0000 UTC m=+806.336325820" watchObservedRunningTime="2026-01-27 18:19:11.232625126 +0000 UTC m=+806.361907738" Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.244792 4907 scope.go:117] "RemoveContainer" containerID="8b0926806961ab50c9c488a6c1db2844dd920a9cd9b6b113a061e53caf3bf66c" Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.245165 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sbcj8"] Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.255545 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sbcj8"] Jan 27 18:19:11 crc kubenswrapper[4907]: I0127 18:19:11.772337 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="232ce760-7804-4270-9073-256444e355ea" 
path="/var/lib/kubelet/pods/232ce760-7804-4270-9073-256444e355ea/volumes" Jan 27 18:19:18 crc kubenswrapper[4907]: I0127 18:19:18.247437 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" event={"ID":"6347c63b-e1fb-4570-a350-68a9f9f1b79b","Type":"ContainerStarted","Data":"fb1e2b89a26f7a786ddbaa85c7b6ba998e780bea2a3765880fc6333733e9e8a8"} Jan 27 18:19:18 crc kubenswrapper[4907]: I0127 18:19:18.248380 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" Jan 27 18:19:18 crc kubenswrapper[4907]: I0127 18:19:18.250753 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" Jan 27 18:19:18 crc kubenswrapper[4907]: I0127 18:19:18.277258 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" podStartSLOduration=1.720722765 podStartE2EDuration="19.277228672s" podCreationTimestamp="2026-01-27 18:18:59 +0000 UTC" firstStartedPulling="2026-01-27 18:18:59.933215071 +0000 UTC m=+795.062497683" lastFinishedPulling="2026-01-27 18:19:17.489720978 +0000 UTC m=+812.619003590" observedRunningTime="2026-01-27 18:19:18.272882966 +0000 UTC m=+813.402165628" watchObservedRunningTime="2026-01-27 18:19:18.277228672 +0000 UTC m=+813.406511334" Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.172505 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Jan 27 18:19:23 crc kubenswrapper[4907]: E0127 18:19:23.173703 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232ce760-7804-4270-9073-256444e355ea" containerName="registry-server" Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.173737 4907 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="232ce760-7804-4270-9073-256444e355ea" containerName="registry-server" Jan 27 18:19:23 crc kubenswrapper[4907]: E0127 18:19:23.173751 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232ce760-7804-4270-9073-256444e355ea" containerName="extract-content" Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.173758 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="232ce760-7804-4270-9073-256444e355ea" containerName="extract-content" Jan 27 18:19:23 crc kubenswrapper[4907]: E0127 18:19:23.173778 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232ce760-7804-4270-9073-256444e355ea" containerName="extract-utilities" Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.173785 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="232ce760-7804-4270-9073-256444e355ea" containerName="extract-utilities" Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.173923 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="232ce760-7804-4270-9073-256444e355ea" containerName="registry-server" Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.174582 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.176627 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.176789 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.180834 4907 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-hjd6j" Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.191055 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.212325 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d58a54b6-75ff-412a-8982-0b5d38383c94\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d58a54b6-75ff-412a-8982-0b5d38383c94\") pod \"minio\" (UID: \"09a57c24-4f8a-4799-82b4-1310608086fa\") " pod="minio-dev/minio" Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.212432 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84bnj\" (UniqueName: \"kubernetes.io/projected/09a57c24-4f8a-4799-82b4-1310608086fa-kube-api-access-84bnj\") pod \"minio\" (UID: \"09a57c24-4f8a-4799-82b4-1310608086fa\") " pod="minio-dev/minio" Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.314485 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84bnj\" (UniqueName: \"kubernetes.io/projected/09a57c24-4f8a-4799-82b4-1310608086fa-kube-api-access-84bnj\") pod \"minio\" (UID: \"09a57c24-4f8a-4799-82b4-1310608086fa\") " pod="minio-dev/minio" Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.314642 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-d58a54b6-75ff-412a-8982-0b5d38383c94\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d58a54b6-75ff-412a-8982-0b5d38383c94\") pod \"minio\" (UID: \"09a57c24-4f8a-4799-82b4-1310608086fa\") " pod="minio-dev/minio" Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.319511 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.319590 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d58a54b6-75ff-412a-8982-0b5d38383c94\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d58a54b6-75ff-412a-8982-0b5d38383c94\") pod \"minio\" (UID: \"09a57c24-4f8a-4799-82b4-1310608086fa\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6daae88d10a28a27886d92c3dc3e6bcc3af2dddc5a85e66444be228d182862ed/globalmount\"" pod="minio-dev/minio" Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.340808 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84bnj\" (UniqueName: \"kubernetes.io/projected/09a57c24-4f8a-4799-82b4-1310608086fa-kube-api-access-84bnj\") pod \"minio\" (UID: \"09a57c24-4f8a-4799-82b4-1310608086fa\") " pod="minio-dev/minio" Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.373714 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d58a54b6-75ff-412a-8982-0b5d38383c94\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d58a54b6-75ff-412a-8982-0b5d38383c94\") pod \"minio\" (UID: \"09a57c24-4f8a-4799-82b4-1310608086fa\") " pod="minio-dev/minio" Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.494930 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Jan 27 18:19:23 crc kubenswrapper[4907]: I0127 18:19:23.967726 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 27 18:19:24 crc kubenswrapper[4907]: I0127 18:19:24.295021 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"09a57c24-4f8a-4799-82b4-1310608086fa","Type":"ContainerStarted","Data":"1206abe6204b2eb0954982611d9a9219709aaef25f2f61a5b0f2a0f4aeba9ec1"} Jan 27 18:19:27 crc kubenswrapper[4907]: I0127 18:19:27.321125 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"09a57c24-4f8a-4799-82b4-1310608086fa","Type":"ContainerStarted","Data":"5f444665a79042371164241f9c56bdd93dc691f31aef2fc2d5d77141a9740b7a"} Jan 27 18:19:27 crc kubenswrapper[4907]: I0127 18:19:27.339386 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.203273228 podStartE2EDuration="7.339366056s" podCreationTimestamp="2026-01-27 18:19:20 +0000 UTC" firstStartedPulling="2026-01-27 18:19:23.984290666 +0000 UTC m=+819.113573278" lastFinishedPulling="2026-01-27 18:19:27.120383474 +0000 UTC m=+822.249666106" observedRunningTime="2026-01-27 18:19:27.337518122 +0000 UTC m=+822.466800734" watchObservedRunningTime="2026-01-27 18:19:27.339366056 +0000 UTC m=+822.468648668" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.031325 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64"] Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.032862 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.035624 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-kc9zx" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.037313 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.037783 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.041810 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.046475 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64"] Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.051659 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.194206 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-config\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.194284 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " 
pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.194319 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.194346 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fk8k\" (UniqueName: \"kubernetes.io/projected/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-kube-api-access-2fk8k\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.194384 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.202620 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-r2fdr"] Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.203771 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.206831 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.206856 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.209893 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.219702 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-r2fdr"] Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.295920 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.296021 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-config\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.296089 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: 
\"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.296140 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.296178 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fk8k\" (UniqueName: \"kubernetes.io/projected/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-kube-api-access-2fk8k\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.298362 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.299317 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-config\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.306923 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" 
(UniqueName: \"kubernetes.io/secret/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.307766 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.319616 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2"] Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.320473 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.322081 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fk8k\" (UniqueName: \"kubernetes.io/projected/bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542-kube-api-access-2fk8k\") pod \"logging-loki-distributor-5f678c8dd6-zhq64\" (UID: \"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.325509 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.327896 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.351701 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.352164 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2"] Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.406678 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-s3\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.406739 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.406796 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70874c1f-da0d-4389-8021-fd3003150fff-config\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.406818 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsnfc\" (UniqueName: \"kubernetes.io/projected/70874c1f-da0d-4389-8021-fd3003150fff-kube-api-access-wsnfc\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " 
pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.406848 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.406869 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.406895 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f62d8a1-62d1-4206-b061-f75c44ff2450-config\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.406931 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpk69\" (UniqueName: \"kubernetes.io/projected/8f62d8a1-62d1-4206-b061-f75c44ff2450-kube-api-access-tpk69\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.406948 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/70874c1f-da0d-4389-8021-fd3003150fff-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.406971 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/70874c1f-da0d-4389-8021-fd3003150fff-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.407013 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70874c1f-da0d-4389-8021-fd3003150fff-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.469338 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-795ff9d55b-njxl9"] Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.472669 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.476122 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.476468 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-5txtb" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.480054 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.480420 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.480546 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.480693 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.495133 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k"] Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.496632 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.505191 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-795ff9d55b-njxl9"] Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.519763 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f62d8a1-62d1-4206-b061-f75c44ff2450-config\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.519817 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/70874c1f-da0d-4389-8021-fd3003150fff-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.519839 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpk69\" (UniqueName: \"kubernetes.io/projected/8f62d8a1-62d1-4206-b061-f75c44ff2450-kube-api-access-tpk69\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.519860 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/70874c1f-da0d-4389-8021-fd3003150fff-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " 
pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.519894 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70874c1f-da0d-4389-8021-fd3003150fff-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.519915 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-s3\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.519930 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k"] Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.519938 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.520107 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70874c1f-da0d-4389-8021-fd3003150fff-config\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: 
I0127 18:19:32.520141 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsnfc\" (UniqueName: \"kubernetes.io/projected/70874c1f-da0d-4389-8021-fd3003150fff-kube-api-access-wsnfc\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.520184 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.520219 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.524197 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f62d8a1-62d1-4206-b061-f75c44ff2450-config\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.527110 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/70874c1f-da0d-4389-8021-fd3003150fff-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: 
\"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.529229 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70874c1f-da0d-4389-8021-fd3003150fff-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.530027 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70874c1f-da0d-4389-8021-fd3003150fff-config\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.532051 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.532064 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.535833 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: 
\"kubernetes.io/secret/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.536445 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8f62d8a1-62d1-4206-b061-f75c44ff2450-logging-loki-s3\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.537099 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/70874c1f-da0d-4389-8021-fd3003150fff-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.560612 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpk69\" (UniqueName: \"kubernetes.io/projected/8f62d8a1-62d1-4206-b061-f75c44ff2450-kube-api-access-tpk69\") pod \"logging-loki-querier-76788598db-r2fdr\" (UID: \"8f62d8a1-62d1-4206-b061-f75c44ff2450\") " pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.566545 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsnfc\" (UniqueName: \"kubernetes.io/projected/70874c1f-da0d-4389-8021-fd3003150fff-kube-api-access-wsnfc\") pod \"logging-loki-query-frontend-69d9546745-4ngf2\" (UID: \"70874c1f-da0d-4389-8021-fd3003150fff\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc 
kubenswrapper[4907]: I0127 18:19:32.621805 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-tls-secret\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.622133 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.622264 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.622338 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-rbac\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.622462 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: 
\"kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-tenants\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.622537 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-lokistack-gateway\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.622595 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-rbac\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.623917 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b427f\" (UniqueName: \"kubernetes.io/projected/d57b015c-f3fc-424d-b910-96e63c6da31a-kube-api-access-b427f\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.624003 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-tls-secret\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: 
I0127 18:19:32.624060 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.624191 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-tenants\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.624220 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-logging-loki-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.624254 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-logging-loki-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.624284 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7blgg\" (UniqueName: 
\"kubernetes.io/projected/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-kube-api-access-7blgg\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.624439 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.624564 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-lokistack-gateway\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.692438 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.725343 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-tls-secret\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.725397 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.725431 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-tenants\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.725453 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-logging-loki-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.725473 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-logging-loki-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.725491 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7blgg\" (UniqueName: \"kubernetes.io/projected/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-kube-api-access-7blgg\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.725509 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.725533 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-lokistack-gateway\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.725579 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-tls-secret\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 
18:19:32.725605 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.725627 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-rbac\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.725645 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: E0127 18:19:32.726500 4907 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Jan 27 18:19:32 crc kubenswrapper[4907]: E0127 18:19:32.726608 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-tls-secret podName:d57b015c-f3fc-424d-b910-96e63c6da31a nodeName:}" failed. No retries permitted until 2026-01-27 18:19:33.226586376 +0000 UTC m=+828.355868988 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-tls-secret") pod "logging-loki-gateway-795ff9d55b-mwm5k" (UID: "d57b015c-f3fc-424d-b910-96e63c6da31a") : secret "logging-loki-gateway-http" not found Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.726682 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.726774 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-logging-loki-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.726810 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-logging-loki-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: E0127 18:19:32.726905 4907 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.726915 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-tenants\") pod 
\"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: E0127 18:19:32.726938 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-tls-secret podName:faf9da31-9bbb-43b4-9cc1-a80f95392ccf nodeName:}" failed. No retries permitted until 2026-01-27 18:19:33.226930026 +0000 UTC m=+828.356212638 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-tls-secret") pod "logging-loki-gateway-795ff9d55b-njxl9" (UID: "faf9da31-9bbb-43b4-9cc1-a80f95392ccf") : secret "logging-loki-gateway-http" not found Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.726965 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-lokistack-gateway\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.727018 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-rbac\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.727072 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b427f\" (UniqueName: \"kubernetes.io/projected/d57b015c-f3fc-424d-b910-96e63c6da31a-kube-api-access-b427f\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " 
pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.727705 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-rbac\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.727809 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d57b015c-f3fc-424d-b910-96e63c6da31a-lokistack-gateway\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.728481 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.728737 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-rbac\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.737925 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-lokistack-gateway\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: 
\"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.744202 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-tenants\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.745219 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-tenants\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.745867 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.746515 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.747201 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b427f\" (UniqueName: 
\"kubernetes.io/projected/d57b015c-f3fc-424d-b910-96e63c6da31a-kube-api-access-b427f\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.749373 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7blgg\" (UniqueName: \"kubernetes.io/projected/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-kube-api-access-7blgg\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.820219 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:32 crc kubenswrapper[4907]: I0127 18:19:32.938517 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64"] Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.050338 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2"] Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.222680 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.224923 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.237919 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.239273 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-tls-secret\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.239371 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-tls-secret\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.248522 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.250728 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d57b015c-f3fc-424d-b910-96e63c6da31a-tls-secret\") pod \"logging-loki-gateway-795ff9d55b-mwm5k\" (UID: \"d57b015c-f3fc-424d-b910-96e63c6da31a\") " pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.253248 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/faf9da31-9bbb-43b4-9cc1-a80f95392ccf-tls-secret\") pod \"logging-loki-gateway-795ff9d55b-njxl9\" (UID: \"faf9da31-9bbb-43b4-9cc1-a80f95392ccf\") " 
pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.265810 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.317022 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.318731 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.321849 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.323244 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.324114 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.341742 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.341794 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-config\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.341827 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-781f9dd2-0de7-48f0-864e-77bafe73e48f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-781f9dd2-0de7-48f0-864e-77bafe73e48f\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.341861 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.341903 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4vlv\" (UniqueName: \"kubernetes.io/projected/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-kube-api-access-r4vlv\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.341923 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7be3af20-b1b1-42cd-b972-0b5dce5dc379\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7be3af20-b1b1-42cd-b972-0b5dce5dc379\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.341942 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: 
\"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.342002 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.374272 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-r2fdr"] Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.393873 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.394476 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" event={"ID":"70874c1f-da0d-4389-8021-fd3003150fff","Type":"ContainerStarted","Data":"d673e719e31fdd890b64083c125e558045e8d73c0edc9b3ceab510c99da31a04"} Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.401049 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" event={"ID":"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542","Type":"ContainerStarted","Data":"efde6641e3402f3e93ef12c37377589e9791c9f9d73577e58fbe0df9b7e2b504"} Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.403255 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" event={"ID":"8f62d8a1-62d1-4206-b061-f75c44ff2450","Type":"ContainerStarted","Data":"c42dde8c073c678dad062c43250215a91efba2a5a88761ae559d49bdfa0cc2c7"} Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.408778 4907 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.410289 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.415200 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.415469 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.427312 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.443499 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.443629 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9jvq\" (UniqueName: \"kubernetes.io/projected/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-kube-api-access-v9jvq\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.444938 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " 
pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445048 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c1b7f5d9-fe84-4a56-a715-9c55df75eab7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1b7f5d9-fe84-4a56-a715-9c55df75eab7\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445083 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445132 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445161 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-config\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445204 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-781f9dd2-0de7-48f0-864e-77bafe73e48f\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-781f9dd2-0de7-48f0-864e-77bafe73e48f\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445229 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445266 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445297 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4vlv\" (UniqueName: \"kubernetes.io/projected/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-kube-api-access-r4vlv\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445326 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7be3af20-b1b1-42cd-b972-0b5dce5dc379\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7be3af20-b1b1-42cd-b972-0b5dce5dc379\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445359 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445390 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445427 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-config\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.445475 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.447784 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-config\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.452978 4907 csi_attacher.go:380] kubernetes.io/csi: 
attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.453184 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7be3af20-b1b1-42cd-b972-0b5dce5dc379\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7be3af20-b1b1-42cd-b972-0b5dce5dc379\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ccc690943db56f0ff69e57c33c7a66f3c532dfa1618c3e0713165343c19787a9/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.453957 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.459111 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.461476 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.461544 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-781f9dd2-0de7-48f0-864e-77bafe73e48f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-781f9dd2-0de7-48f0-864e-77bafe73e48f\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b90cd83c2d852d1e2674df5af163b9c3d8362e5073e72af605774f87d64e892f/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.463734 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4vlv\" (UniqueName: \"kubernetes.io/projected/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-kube-api-access-r4vlv\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.464253 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.464774 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/30b4b16e-4eff-46be-aac5-63d2b3d8fdf2-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.491383 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7be3af20-b1b1-42cd-b972-0b5dce5dc379\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7be3af20-b1b1-42cd-b972-0b5dce5dc379\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.502252 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-781f9dd2-0de7-48f0-864e-77bafe73e48f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-781f9dd2-0de7-48f0-864e-77bafe73e48f\") pod \"logging-loki-ingester-0\" (UID: \"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.546795 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.546862 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.546903 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-config\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.546937 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.546968 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv9mr\" (UniqueName: \"kubernetes.io/projected/a9dc6389-0ad3-4259-aaf2-945493e66aa2-kube-api-access-lv9mr\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.546993 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.547017 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9jvq\" (UniqueName: \"kubernetes.io/projected/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-kube-api-access-v9jvq\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.547038 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9dc6389-0ad3-4259-aaf2-945493e66aa2-config\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.547075 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c1b7f5d9-fe84-4a56-a715-9c55df75eab7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1b7f5d9-fe84-4a56-a715-9c55df75eab7\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.547099 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.547129 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.547175 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.547207 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5f6fb176-b10b-4194-8d04-3c17c8a2bf8b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5f6fb176-b10b-4194-8d04-3c17c8a2bf8b\") pod \"logging-loki-index-gateway-0\" (UID: 
\"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.547229 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.549046 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.551956 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-config\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.553334 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.553504 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.553543 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c1b7f5d9-fe84-4a56-a715-9c55df75eab7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1b7f5d9-fe84-4a56-a715-9c55df75eab7\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/99d78d22969a0c3dbf26b0f48c3c2ad420f6ccda5c59f1119c34ac875b1b6e7f/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.554123 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.557495 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.564939 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9jvq\" (UniqueName: \"kubernetes.io/projected/2448dad5-d0f7-4335-a3fb-a23c5ef59bbf-kube-api-access-v9jvq\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.575010 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.597701 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c1b7f5d9-fe84-4a56-a715-9c55df75eab7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1b7f5d9-fe84-4a56-a715-9c55df75eab7\") pod \"logging-loki-compactor-0\" (UID: \"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.648116 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9dc6389-0ad3-4259-aaf2-945493e66aa2-config\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.648206 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.648269 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5f6fb176-b10b-4194-8d04-3c17c8a2bf8b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5f6fb176-b10b-4194-8d04-3c17c8a2bf8b\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.648301 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: 
\"kubernetes.io/secret/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.648360 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.648398 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv9mr\" (UniqueName: \"kubernetes.io/projected/a9dc6389-0ad3-4259-aaf2-945493e66aa2-kube-api-access-lv9mr\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.648417 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.650482 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9dc6389-0ad3-4259-aaf2-945493e66aa2-config\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.650873 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.653836 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.655518 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.655540 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5f6fb176-b10b-4194-8d04-3c17c8a2bf8b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5f6fb176-b10b-4194-8d04-3c17c8a2bf8b\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e85ec94bd0636207f226e146b2f799242e2ac55c6d7e547e5ea0a6a5c7b9b6c6/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.656982 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.659063 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/a9dc6389-0ad3-4259-aaf2-945493e66aa2-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.672232 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv9mr\" (UniqueName: \"kubernetes.io/projected/a9dc6389-0ad3-4259-aaf2-945493e66aa2-kube-api-access-lv9mr\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.685226 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5f6fb176-b10b-4194-8d04-3c17c8a2bf8b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5f6fb176-b10b-4194-8d04-3c17c8a2bf8b\") pod \"logging-loki-index-gateway-0\" (UID: \"a9dc6389-0ad3-4259-aaf2-945493e66aa2\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.704685 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.807023 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.956284 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 27 18:19:33 crc kubenswrapper[4907]: I0127 18:19:33.999045 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-795ff9d55b-njxl9"] Jan 27 18:19:34 crc kubenswrapper[4907]: W0127 18:19:34.004347 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaf9da31_9bbb_43b4_9cc1_a80f95392ccf.slice/crio-caf09dc4d6d18632ec29e394ab3dab905057586177fc1cd02146a30eff5ab529 WatchSource:0}: Error finding container caf09dc4d6d18632ec29e394ab3dab905057586177fc1cd02146a30eff5ab529: Status 404 returned error can't find the container with id caf09dc4d6d18632ec29e394ab3dab905057586177fc1cd02146a30eff5ab529 Jan 27 18:19:34 crc kubenswrapper[4907]: I0127 18:19:34.047734 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k"] Jan 27 18:19:34 crc kubenswrapper[4907]: W0127 18:19:34.051343 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd57b015c_f3fc_424d_b910_96e63c6da31a.slice/crio-d16d7f517f79d63ac92793522ddf8ad050c862b9b282b7494564e7b5748e4c31 WatchSource:0}: Error finding container d16d7f517f79d63ac92793522ddf8ad050c862b9b282b7494564e7b5748e4c31: Status 404 returned error can't find the container with id d16d7f517f79d63ac92793522ddf8ad050c862b9b282b7494564e7b5748e4c31 Jan 27 18:19:34 crc kubenswrapper[4907]: W0127 18:19:34.084497 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2448dad5_d0f7_4335_a3fb_a23c5ef59bbf.slice/crio-fe478ee710f3101034594255779043f90ebe5615b4f4466ceabe115a2cdab843 
WatchSource:0}: Error finding container fe478ee710f3101034594255779043f90ebe5615b4f4466ceabe115a2cdab843: Status 404 returned error can't find the container with id fe478ee710f3101034594255779043f90ebe5615b4f4466ceabe115a2cdab843 Jan 27 18:19:34 crc kubenswrapper[4907]: I0127 18:19:34.085892 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 27 18:19:34 crc kubenswrapper[4907]: I0127 18:19:34.379655 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 27 18:19:34 crc kubenswrapper[4907]: I0127 18:19:34.415038 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" event={"ID":"faf9da31-9bbb-43b4-9cc1-a80f95392ccf","Type":"ContainerStarted","Data":"caf09dc4d6d18632ec29e394ab3dab905057586177fc1cd02146a30eff5ab529"} Jan 27 18:19:34 crc kubenswrapper[4907]: I0127 18:19:34.417056 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" event={"ID":"d57b015c-f3fc-424d-b910-96e63c6da31a","Type":"ContainerStarted","Data":"d16d7f517f79d63ac92793522ddf8ad050c862b9b282b7494564e7b5748e4c31"} Jan 27 18:19:34 crc kubenswrapper[4907]: I0127 18:19:34.419109 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2","Type":"ContainerStarted","Data":"057cda60221ca368ed15ef39acc10b01864652a5b999174af6ea352ff0dae47b"} Jan 27 18:19:34 crc kubenswrapper[4907]: I0127 18:19:34.420458 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf","Type":"ContainerStarted","Data":"fe478ee710f3101034594255779043f90ebe5615b4f4466ceabe115a2cdab843"} Jan 27 18:19:34 crc kubenswrapper[4907]: I0127 18:19:34.422677 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"a9dc6389-0ad3-4259-aaf2-945493e66aa2","Type":"ContainerStarted","Data":"feac54ef90d07c99482c20fa97ec9ab8574866da10d6ac8526e21ba4e439bd0d"} Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.470310 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" event={"ID":"8f62d8a1-62d1-4206-b061-f75c44ff2450","Type":"ContainerStarted","Data":"f83c32ae9c0c6afb94fc6ec1f7e91a867355d74476c7c65fcc7cac0b83fecf85"} Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.472136 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.474161 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" event={"ID":"70874c1f-da0d-4389-8021-fd3003150fff","Type":"ContainerStarted","Data":"c85a941bd03ea248f5abdb09c90c9de2104f44a5f567cd252cadeb077a3e0255"} Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.474771 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.477702 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" event={"ID":"bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542","Type":"ContainerStarted","Data":"3b8c71675ff7a39e19d56b4367412719dcd2aca6c99d5932c4074db781c0db9b"} Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.478246 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.479816 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" 
event={"ID":"30b4b16e-4eff-46be-aac5-63d2b3d8fdf2","Type":"ContainerStarted","Data":"628096152c42ecc3f5abe46b0bcb43ce9dc5cf31388d2b6e8934e2f3767a0f9b"} Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.480418 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.481771 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"2448dad5-d0f7-4335-a3fb-a23c5ef59bbf","Type":"ContainerStarted","Data":"b2aa753307f2618fbffb81bc7e29e603b6cd8f43925033a06ded1fe06c58ff19"} Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.482327 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.508043 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" podStartSLOduration=2.493857602 podStartE2EDuration="5.50801558s" podCreationTimestamp="2026-01-27 18:19:32 +0000 UTC" firstStartedPulling="2026-01-27 18:19:33.372158892 +0000 UTC m=+828.501441504" lastFinishedPulling="2026-01-27 18:19:36.38631686 +0000 UTC m=+831.515599482" observedRunningTime="2026-01-27 18:19:37.492664437 +0000 UTC m=+832.621947069" watchObservedRunningTime="2026-01-27 18:19:37.50801558 +0000 UTC m=+832.637298182" Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.515118 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"a9dc6389-0ad3-4259-aaf2-945493e66aa2","Type":"ContainerStarted","Data":"f339e45730c774c02834a2c8b251a0ea3a4e6ee3f94ebf708e9bde61436a86ea"} Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.515589 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:19:37 crc 
kubenswrapper[4907]: I0127 18:19:37.523911 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" podStartSLOduration=1.893670965 podStartE2EDuration="5.523890978s" podCreationTimestamp="2026-01-27 18:19:32 +0000 UTC" firstStartedPulling="2026-01-27 18:19:33.054623635 +0000 UTC m=+828.183906257" lastFinishedPulling="2026-01-27 18:19:36.684843658 +0000 UTC m=+831.814126270" observedRunningTime="2026-01-27 18:19:37.514337303 +0000 UTC m=+832.643619915" watchObservedRunningTime="2026-01-27 18:19:37.523890978 +0000 UTC m=+832.653173590" Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.536230 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.026912889 podStartE2EDuration="5.536202774s" podCreationTimestamp="2026-01-27 18:19:32 +0000 UTC" firstStartedPulling="2026-01-27 18:19:34.086732949 +0000 UTC m=+829.216015561" lastFinishedPulling="2026-01-27 18:19:36.596022834 +0000 UTC m=+831.725305446" observedRunningTime="2026-01-27 18:19:37.534086223 +0000 UTC m=+832.663368835" watchObservedRunningTime="2026-01-27 18:19:37.536202774 +0000 UTC m=+832.665485386" Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.561187 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.129414238 podStartE2EDuration="5.561148994s" podCreationTimestamp="2026-01-27 18:19:32 +0000 UTC" firstStartedPulling="2026-01-27 18:19:34.00258238 +0000 UTC m=+829.131864992" lastFinishedPulling="2026-01-27 18:19:36.434317106 +0000 UTC m=+831.563599748" observedRunningTime="2026-01-27 18:19:37.55650057 +0000 UTC m=+832.685783192" watchObservedRunningTime="2026-01-27 18:19:37.561148994 +0000 UTC m=+832.690431606" Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.591297 4907 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" podStartSLOduration=2.17683583 podStartE2EDuration="5.591262503s" podCreationTimestamp="2026-01-27 18:19:32 +0000 UTC" firstStartedPulling="2026-01-27 18:19:32.950602223 +0000 UTC m=+828.079884835" lastFinishedPulling="2026-01-27 18:19:36.365028896 +0000 UTC m=+831.494311508" observedRunningTime="2026-01-27 18:19:37.578253288 +0000 UTC m=+832.707535910" watchObservedRunningTime="2026-01-27 18:19:37.591262503 +0000 UTC m=+832.720545125" Jan 27 18:19:37 crc kubenswrapper[4907]: I0127 18:19:37.603311 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.3130834399999998 podStartE2EDuration="5.60328941s" podCreationTimestamp="2026-01-27 18:19:32 +0000 UTC" firstStartedPulling="2026-01-27 18:19:34.388221592 +0000 UTC m=+829.517504204" lastFinishedPulling="2026-01-27 18:19:36.678427562 +0000 UTC m=+831.807710174" observedRunningTime="2026-01-27 18:19:37.600992164 +0000 UTC m=+832.730274776" watchObservedRunningTime="2026-01-27 18:19:37.60328941 +0000 UTC m=+832.732572022" Jan 27 18:19:38 crc kubenswrapper[4907]: I0127 18:19:38.525406 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" event={"ID":"faf9da31-9bbb-43b4-9cc1-a80f95392ccf","Type":"ContainerStarted","Data":"9fb9edcfa7e6be0446bac646702dd512220d81b7c1403076c08933a25eb3ac09"} Jan 27 18:19:38 crc kubenswrapper[4907]: I0127 18:19:38.527849 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" event={"ID":"d57b015c-f3fc-424d-b910-96e63c6da31a","Type":"ContainerStarted","Data":"a85acb993e555d7a09c2f5f4c314467ba64332a42535476888d3ab36570ae963"} Jan 27 18:19:40 crc kubenswrapper[4907]: I0127 18:19:40.546808 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" event={"ID":"d57b015c-f3fc-424d-b910-96e63c6da31a","Type":"ContainerStarted","Data":"ba77dc4cbb0aba628360200d4ea5f2cec320a2ff6c85474ef9b6f3f64b7fc4ca"} Jan 27 18:19:40 crc kubenswrapper[4907]: I0127 18:19:40.547991 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:40 crc kubenswrapper[4907]: I0127 18:19:40.548078 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:40 crc kubenswrapper[4907]: I0127 18:19:40.549668 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-mwm5k container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8083/ready\": dial tcp 10.217.0.50:8083: connect: connection refused" start-of-body= Jan 27 18:19:40 crc kubenswrapper[4907]: I0127 18:19:40.549759 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" podUID="d57b015c-f3fc-424d-b910-96e63c6da31a" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.50:8083/ready\": dial tcp 10.217.0.50:8083: connect: connection refused" Jan 27 18:19:40 crc kubenswrapper[4907]: I0127 18:19:40.572331 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:40 crc kubenswrapper[4907]: I0127 18:19:40.591394 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" podStartSLOduration=2.383150046 podStartE2EDuration="8.591371256s" podCreationTimestamp="2026-01-27 18:19:32 +0000 UTC" firstStartedPulling="2026-01-27 18:19:34.056754964 +0000 UTC m=+829.186037576" lastFinishedPulling="2026-01-27 18:19:40.264976174 +0000 UTC m=+835.394258786" 
observedRunningTime="2026-01-27 18:19:40.583217291 +0000 UTC m=+835.712499903" watchObservedRunningTime="2026-01-27 18:19:40.591371256 +0000 UTC m=+835.720653888" Jan 27 18:19:41 crc kubenswrapper[4907]: I0127 18:19:41.576283 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" Jan 27 18:19:52 crc kubenswrapper[4907]: I0127 18:19:52.366823 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" Jan 27 18:19:52 crc kubenswrapper[4907]: I0127 18:19:52.703507 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" Jan 27 18:19:52 crc kubenswrapper[4907]: I0127 18:19:52.829958 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" Jan 27 18:19:53 crc kubenswrapper[4907]: I0127 18:19:53.583769 4907 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Jan 27 18:19:53 crc kubenswrapper[4907]: I0127 18:19:53.583862 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="30b4b16e-4eff-46be-aac5-63d2b3d8fdf2" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 18:19:53 crc kubenswrapper[4907]: I0127 18:19:53.716057 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Jan 27 18:19:53 crc kubenswrapper[4907]: I0127 18:19:53.823787 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 18:20:03 
crc kubenswrapper[4907]: I0127 18:20:03.580306 4907 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Jan 27 18:20:03 crc kubenswrapper[4907]: I0127 18:20:03.581213 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="30b4b16e-4eff-46be-aac5-63d2b3d8fdf2" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 18:20:10 crc kubenswrapper[4907]: I0127 18:20:10.833640 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" event={"ID":"faf9da31-9bbb-43b4-9cc1-a80f95392ccf","Type":"ContainerStarted","Data":"b2a5a65f14b1bc2e166f1c286f3c1bfb74a3d9022a08bbd596d0fdb92264a1ff"} Jan 27 18:20:10 crc kubenswrapper[4907]: I0127 18:20:10.834551 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:20:10 crc kubenswrapper[4907]: I0127 18:20:10.834593 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:20:10 crc kubenswrapper[4907]: I0127 18:20:10.849310 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:20:10 crc kubenswrapper[4907]: I0127 18:20:10.854651 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" Jan 27 18:20:10 crc kubenswrapper[4907]: I0127 18:20:10.867469 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" podStartSLOduration=2.755317419 
podStartE2EDuration="38.867443987s" podCreationTimestamp="2026-01-27 18:19:32 +0000 UTC" firstStartedPulling="2026-01-27 18:19:34.007870613 +0000 UTC m=+829.137153225" lastFinishedPulling="2026-01-27 18:20:10.119997141 +0000 UTC m=+865.249279793" observedRunningTime="2026-01-27 18:20:10.860097595 +0000 UTC m=+865.989380207" watchObservedRunningTime="2026-01-27 18:20:10.867443987 +0000 UTC m=+865.996726609" Jan 27 18:20:13 crc kubenswrapper[4907]: I0127 18:20:13.581449 4907 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Jan 27 18:20:13 crc kubenswrapper[4907]: I0127 18:20:13.582024 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="30b4b16e-4eff-46be-aac5-63d2b3d8fdf2" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.394576 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ptc2v"] Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.397311 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ptc2v" Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.409258 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ptc2v"] Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.548403 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-utilities\") pod \"community-operators-ptc2v\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") " pod="openshift-marketplace/community-operators-ptc2v" Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.548502 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r26tf\" (UniqueName: \"kubernetes.io/projected/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-kube-api-access-r26tf\") pod \"community-operators-ptc2v\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") " pod="openshift-marketplace/community-operators-ptc2v" Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.548549 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-catalog-content\") pod \"community-operators-ptc2v\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") " pod="openshift-marketplace/community-operators-ptc2v" Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.650933 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-utilities\") pod \"community-operators-ptc2v\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") " pod="openshift-marketplace/community-operators-ptc2v" Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.651058 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-r26tf\" (UniqueName: \"kubernetes.io/projected/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-kube-api-access-r26tf\") pod \"community-operators-ptc2v\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") " pod="openshift-marketplace/community-operators-ptc2v" Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.651096 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-catalog-content\") pod \"community-operators-ptc2v\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") " pod="openshift-marketplace/community-operators-ptc2v" Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.651549 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-utilities\") pod \"community-operators-ptc2v\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") " pod="openshift-marketplace/community-operators-ptc2v" Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.651890 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-catalog-content\") pod \"community-operators-ptc2v\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") " pod="openshift-marketplace/community-operators-ptc2v" Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.674463 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r26tf\" (UniqueName: \"kubernetes.io/projected/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-kube-api-access-r26tf\") pod \"community-operators-ptc2v\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") " pod="openshift-marketplace/community-operators-ptc2v" Jan 27 18:20:22 crc kubenswrapper[4907]: I0127 18:20:22.726434 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ptc2v" Jan 27 18:20:23 crc kubenswrapper[4907]: I0127 18:20:23.326826 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ptc2v"] Jan 27 18:20:23 crc kubenswrapper[4907]: W0127 18:20:23.332866 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6d4cec2_3672_4d27_aac3_0e29e9f913aa.slice/crio-23834142072aa6a4889014a028ce2205e83d68b34f4267924d5ddf33b5141a10 WatchSource:0}: Error finding container 23834142072aa6a4889014a028ce2205e83d68b34f4267924d5ddf33b5141a10: Status 404 returned error can't find the container with id 23834142072aa6a4889014a028ce2205e83d68b34f4267924d5ddf33b5141a10 Jan 27 18:20:23 crc kubenswrapper[4907]: I0127 18:20:23.580012 4907 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Jan 27 18:20:23 crc kubenswrapper[4907]: I0127 18:20:23.580578 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="30b4b16e-4eff-46be-aac5-63d2b3d8fdf2" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 18:20:23 crc kubenswrapper[4907]: I0127 18:20:23.971399 4907 generic.go:334] "Generic (PLEG): container finished" podID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" containerID="310ddf08457ce44f2c88f6506cb141e303257c3b234d8bb1c63125e395c43f24" exitCode=0 Jan 27 18:20:23 crc kubenswrapper[4907]: I0127 18:20:23.971471 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptc2v" 
event={"ID":"e6d4cec2-3672-4d27-aac3-0e29e9f913aa","Type":"ContainerDied","Data":"310ddf08457ce44f2c88f6506cb141e303257c3b234d8bb1c63125e395c43f24"} Jan 27 18:20:23 crc kubenswrapper[4907]: I0127 18:20:23.971504 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptc2v" event={"ID":"e6d4cec2-3672-4d27-aac3-0e29e9f913aa","Type":"ContainerStarted","Data":"23834142072aa6a4889014a028ce2205e83d68b34f4267924d5ddf33b5141a10"} Jan 27 18:20:24 crc kubenswrapper[4907]: I0127 18:20:24.980587 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptc2v" event={"ID":"e6d4cec2-3672-4d27-aac3-0e29e9f913aa","Type":"ContainerStarted","Data":"060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478"} Jan 27 18:20:25 crc kubenswrapper[4907]: I0127 18:20:25.992333 4907 generic.go:334] "Generic (PLEG): container finished" podID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" containerID="060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478" exitCode=0 Jan 27 18:20:25 crc kubenswrapper[4907]: I0127 18:20:25.992419 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptc2v" event={"ID":"e6d4cec2-3672-4d27-aac3-0e29e9f913aa","Type":"ContainerDied","Data":"060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478"} Jan 27 18:20:25 crc kubenswrapper[4907]: I0127 18:20:25.992869 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptc2v" event={"ID":"e6d4cec2-3672-4d27-aac3-0e29e9f913aa","Type":"ContainerStarted","Data":"d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c"} Jan 27 18:20:26 crc kubenswrapper[4907]: I0127 18:20:26.021693 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ptc2v" podStartSLOduration=2.605695844 podStartE2EDuration="4.021654558s" podCreationTimestamp="2026-01-27 18:20:22 
+0000 UTC" firstStartedPulling="2026-01-27 18:20:23.974226736 +0000 UTC m=+879.103509348" lastFinishedPulling="2026-01-27 18:20:25.39018545 +0000 UTC m=+880.519468062" observedRunningTime="2026-01-27 18:20:26.015316435 +0000 UTC m=+881.144599047" watchObservedRunningTime="2026-01-27 18:20:26.021654558 +0000 UTC m=+881.150937190" Jan 27 18:20:32 crc kubenswrapper[4907]: I0127 18:20:32.727411 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ptc2v" Jan 27 18:20:32 crc kubenswrapper[4907]: I0127 18:20:32.729749 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ptc2v" Jan 27 18:20:32 crc kubenswrapper[4907]: I0127 18:20:32.799920 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ptc2v" Jan 27 18:20:33 crc kubenswrapper[4907]: I0127 18:20:33.114793 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ptc2v" Jan 27 18:20:33 crc kubenswrapper[4907]: I0127 18:20:33.171690 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ptc2v"] Jan 27 18:20:33 crc kubenswrapper[4907]: I0127 18:20:33.583193 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Jan 27 18:20:35 crc kubenswrapper[4907]: I0127 18:20:35.083114 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ptc2v" podUID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" containerName="registry-server" containerID="cri-o://d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c" gracePeriod=2 Jan 27 18:20:35 crc kubenswrapper[4907]: I0127 18:20:35.724771 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ptc2v" Jan 27 18:20:35 crc kubenswrapper[4907]: I0127 18:20:35.824780 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-catalog-content\") pod \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") " Jan 27 18:20:35 crc kubenswrapper[4907]: I0127 18:20:35.824883 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-utilities\") pod \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") " Jan 27 18:20:35 crc kubenswrapper[4907]: I0127 18:20:35.825037 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r26tf\" (UniqueName: \"kubernetes.io/projected/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-kube-api-access-r26tf\") pod \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\" (UID: \"e6d4cec2-3672-4d27-aac3-0e29e9f913aa\") " Jan 27 18:20:35 crc kubenswrapper[4907]: I0127 18:20:35.825916 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-utilities" (OuterVolumeSpecName: "utilities") pod "e6d4cec2-3672-4d27-aac3-0e29e9f913aa" (UID: "e6d4cec2-3672-4d27-aac3-0e29e9f913aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:20:35 crc kubenswrapper[4907]: I0127 18:20:35.839506 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-kube-api-access-r26tf" (OuterVolumeSpecName: "kube-api-access-r26tf") pod "e6d4cec2-3672-4d27-aac3-0e29e9f913aa" (UID: "e6d4cec2-3672-4d27-aac3-0e29e9f913aa"). InnerVolumeSpecName "kube-api-access-r26tf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:20:35 crc kubenswrapper[4907]: I0127 18:20:35.927253 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:20:35 crc kubenswrapper[4907]: I0127 18:20:35.927303 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r26tf\" (UniqueName: \"kubernetes.io/projected/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-kube-api-access-r26tf\") on node \"crc\" DevicePath \"\"" Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.031602 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6d4cec2-3672-4d27-aac3-0e29e9f913aa" (UID: "e6d4cec2-3672-4d27-aac3-0e29e9f913aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.093246 4907 generic.go:334] "Generic (PLEG): container finished" podID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" containerID="d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c" exitCode=0 Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.093306 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptc2v" event={"ID":"e6d4cec2-3672-4d27-aac3-0e29e9f913aa","Type":"ContainerDied","Data":"d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c"} Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.093340 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptc2v" event={"ID":"e6d4cec2-3672-4d27-aac3-0e29e9f913aa","Type":"ContainerDied","Data":"23834142072aa6a4889014a028ce2205e83d68b34f4267924d5ddf33b5141a10"} Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 
18:20:36.093363 4907 scope.go:117] "RemoveContainer" containerID="d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c" Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.093501 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ptc2v" Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.130617 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6d4cec2-3672-4d27-aac3-0e29e9f913aa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.130982 4907 scope.go:117] "RemoveContainer" containerID="060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478" Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.134073 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ptc2v"] Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.142126 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ptc2v"] Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.151414 4907 scope.go:117] "RemoveContainer" containerID="310ddf08457ce44f2c88f6506cb141e303257c3b234d8bb1c63125e395c43f24" Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.191589 4907 scope.go:117] "RemoveContainer" containerID="d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c" Jan 27 18:20:36 crc kubenswrapper[4907]: E0127 18:20:36.191970 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c\": container with ID starting with d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c not found: ID does not exist" containerID="d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c" Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 
18:20:36.192009 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c"} err="failed to get container status \"d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c\": rpc error: code = NotFound desc = could not find container \"d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c\": container with ID starting with d8eec4dc10e150d36680c282e09b33bbc803a6d49128c8fdffede7451f54152c not found: ID does not exist" Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.192036 4907 scope.go:117] "RemoveContainer" containerID="060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478" Jan 27 18:20:36 crc kubenswrapper[4907]: E0127 18:20:36.192655 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478\": container with ID starting with 060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478 not found: ID does not exist" containerID="060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478" Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.192690 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478"} err="failed to get container status \"060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478\": rpc error: code = NotFound desc = could not find container \"060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478\": container with ID starting with 060eb74a99a4f040715f8a9dd3d3aeba211ffee6649db5562e0c9463f949e478 not found: ID does not exist" Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.192703 4907 scope.go:117] "RemoveContainer" containerID="310ddf08457ce44f2c88f6506cb141e303257c3b234d8bb1c63125e395c43f24" Jan 27 18:20:36 crc 
kubenswrapper[4907]: E0127 18:20:36.193070 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"310ddf08457ce44f2c88f6506cb141e303257c3b234d8bb1c63125e395c43f24\": container with ID starting with 310ddf08457ce44f2c88f6506cb141e303257c3b234d8bb1c63125e395c43f24 not found: ID does not exist" containerID="310ddf08457ce44f2c88f6506cb141e303257c3b234d8bb1c63125e395c43f24" Jan 27 18:20:36 crc kubenswrapper[4907]: I0127 18:20:36.193096 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"310ddf08457ce44f2c88f6506cb141e303257c3b234d8bb1c63125e395c43f24"} err="failed to get container status \"310ddf08457ce44f2c88f6506cb141e303257c3b234d8bb1c63125e395c43f24\": rpc error: code = NotFound desc = could not find container \"310ddf08457ce44f2c88f6506cb141e303257c3b234d8bb1c63125e395c43f24\": container with ID starting with 310ddf08457ce44f2c88f6506cb141e303257c3b234d8bb1c63125e395c43f24 not found: ID does not exist" Jan 27 18:20:37 crc kubenswrapper[4907]: I0127 18:20:37.762791 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" path="/var/lib/kubelet/pods/e6d4cec2-3672-4d27-aac3-0e29e9f913aa/volumes" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.263255 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-q5b5q"] Jan 27 18:20:52 crc kubenswrapper[4907]: E0127 18:20:52.264398 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" containerName="extract-content" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.264483 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" containerName="extract-content" Jan 27 18:20:52 crc kubenswrapper[4907]: E0127 18:20:52.264500 4907 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" containerName="registry-server" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.264509 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" containerName="registry-server" Jan 27 18:20:52 crc kubenswrapper[4907]: E0127 18:20:52.264547 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" containerName="extract-utilities" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.264637 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" containerName="extract-utilities" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.264832 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d4cec2-3672-4d27-aac3-0e29e9f913aa" containerName="registry-server" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.265664 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-q5b5q" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.268851 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.272677 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.272997 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-npdhl" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.273154 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.273462 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.282352 4907 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.297361 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-q5b5q"] Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.307467 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-datadir\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.307520 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-metrics\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.307584 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.307607 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-tmp\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.307632 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn9h8\" (UniqueName: 
\"kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-kube-api-access-zn9h8\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.307676 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-syslog-receiver\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.307710 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-token\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.307734 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-sa-token\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.307754 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-entrypoint\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.307799 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: 
\"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config-openshift-service-cacrt\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.307838 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-trusted-ca\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.342628 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-q5b5q"] Jan 27 18:20:52 crc kubenswrapper[4907]: E0127 18:20:52.343266 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-zn9h8 metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-q5b5q" podUID="29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409283 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config-openshift-service-cacrt\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q" Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409341 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-trusted-ca\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q" Jan 27 18:20:52 crc 
kubenswrapper[4907]: I0127 18:20:52.409433 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-datadir\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409454 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-metrics\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409483 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409506 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-tmp\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409528 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn9h8\" (UniqueName: \"kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-kube-api-access-zn9h8\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409537 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-datadir\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409593 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-syslog-receiver\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409628 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-token\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409651 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-sa-token\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.409676 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-entrypoint\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: E0127 18:20:52.410313 4907 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found
Jan 27 18:20:52 crc kubenswrapper[4907]: E0127 18:20:52.410430 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-syslog-receiver podName:29bf2bd0-d34f-4ca1-b84f-bb7a003039f4 nodeName:}" failed. No retries permitted until 2026-01-27 18:20:52.910404628 +0000 UTC m=+908.039687240 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-syslog-receiver") pod "collector-q5b5q" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4") : secret "collector-syslog-receiver" not found
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.410549 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-entrypoint\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.410341 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-trusted-ca\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.411491 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config-openshift-service-cacrt\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.411580 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.427689 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-metrics\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.432409 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-tmp\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.438350 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-token\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.438734 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn9h8\" (UniqueName: \"kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-kube-api-access-zn9h8\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.439336 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-sa-token\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.918412 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-syslog-receiver\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:52 crc kubenswrapper[4907]: I0127 18:20:52.921626 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-syslog-receiver\") pod \"collector-q5b5q\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") " pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.246524 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.265138 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.427467 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config\") pod \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") "
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.427604 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-sa-token\") pod \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") "
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.427667 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-datadir\") pod \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") "
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.427703 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-syslog-receiver\") pod \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") "
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.427751 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config-openshift-service-cacrt\") pod \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") "
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.427753 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-datadir" (OuterVolumeSpecName: "datadir") pod "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.427833 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-entrypoint\") pod \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") "
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.427892 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn9h8\" (UniqueName: \"kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-kube-api-access-zn9h8\") pod \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") "
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.428295 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config" (OuterVolumeSpecName: "config") pod "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.428650 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.428710 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-trusted-ca\") pod \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") "
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.428716 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.428766 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-tmp\") pod \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") "
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.428809 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-token\") pod \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") "
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.428843 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-metrics\") pod \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\" (UID: \"29bf2bd0-d34f-4ca1-b84f-bb7a003039f4\") "
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.428805 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.429489 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.429521 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.429537 4907 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-datadir\") on node \"crc\" DevicePath \"\""
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.429572 4907 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\""
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.429590 4907 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-entrypoint\") on node \"crc\" DevicePath \"\""
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.431911 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.432212 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-metrics" (OuterVolumeSpecName: "metrics") pod "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.432684 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-kube-api-access-zn9h8" (OuterVolumeSpecName: "kube-api-access-zn9h8") pod "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"). InnerVolumeSpecName "kube-api-access-zn9h8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.433054 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-sa-token" (OuterVolumeSpecName: "sa-token") pod "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.433286 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-token" (OuterVolumeSpecName: "collector-token") pod "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.434050 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-tmp" (OuterVolumeSpecName: "tmp") pod "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" (UID: "29bf2bd0-d34f-4ca1-b84f-bb7a003039f4"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.531113 4907 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-tmp\") on node \"crc\" DevicePath \"\""
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.531510 4907 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-metrics\") on node \"crc\" DevicePath \"\""
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.531626 4907 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-token\") on node \"crc\" DevicePath \"\""
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.531715 4907 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-sa-token\") on node \"crc\" DevicePath \"\""
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.531805 4907 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-collector-syslog-receiver\") on node \"crc\" DevicePath \"\""
Jan 27 18:20:53 crc kubenswrapper[4907]: I0127 18:20:53.531886 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn9h8\" (UniqueName: \"kubernetes.io/projected/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4-kube-api-access-zn9h8\") on node \"crc\" DevicePath \"\""
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.256261 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-q5b5q"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.300013 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-q5b5q"]
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.315116 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-q5b5q"]
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.321465 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-2bmhz"]
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.322682 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.325391 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.325813 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-npdhl"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.326302 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.326497 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.332642 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.337645 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.355449 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-2bmhz"]
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.449901 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-entrypoint\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.449957 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/e66fb20d-fb54-4964-9fb8-0ca14b94f895-metrics\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.449977 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-trusted-ca\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.450039 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/e66fb20d-fb54-4964-9fb8-0ca14b94f895-datadir\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.450107 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e66fb20d-fb54-4964-9fb8-0ca14b94f895-collector-syslog-receiver\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.450187 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4wdj\" (UniqueName: \"kubernetes.io/projected/e66fb20d-fb54-4964-9fb8-0ca14b94f895-kube-api-access-k4wdj\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.450246 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/e66fb20d-fb54-4964-9fb8-0ca14b94f895-collector-token\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.450304 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-config\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.450360 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-config-openshift-service-cacrt\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.450385 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/e66fb20d-fb54-4964-9fb8-0ca14b94f895-sa-token\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.450439 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e66fb20d-fb54-4964-9fb8-0ca14b94f895-tmp\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.552417 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/e66fb20d-fb54-4964-9fb8-0ca14b94f895-collector-token\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.552512 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-config\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.552585 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-config-openshift-service-cacrt\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.552621 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/e66fb20d-fb54-4964-9fb8-0ca14b94f895-sa-token\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.552666 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e66fb20d-fb54-4964-9fb8-0ca14b94f895-tmp\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.553386 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-entrypoint\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.553831 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-config-openshift-service-cacrt\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.554008 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-config\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.554576 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-entrypoint\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.554612 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/e66fb20d-fb54-4964-9fb8-0ca14b94f895-metrics\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.554654 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-trusted-ca\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.554673 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/e66fb20d-fb54-4964-9fb8-0ca14b94f895-datadir\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.554812 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/e66fb20d-fb54-4964-9fb8-0ca14b94f895-datadir\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.556105 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e66fb20d-fb54-4964-9fb8-0ca14b94f895-tmp\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.556129 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e66fb20d-fb54-4964-9fb8-0ca14b94f895-collector-syslog-receiver\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.556192 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4wdj\" (UniqueName: \"kubernetes.io/projected/e66fb20d-fb54-4964-9fb8-0ca14b94f895-kube-api-access-k4wdj\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.556546 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/e66fb20d-fb54-4964-9fb8-0ca14b94f895-collector-token\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.558182 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/e66fb20d-fb54-4964-9fb8-0ca14b94f895-metrics\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.569242 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e66fb20d-fb54-4964-9fb8-0ca14b94f895-collector-syslog-receiver\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.574766 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/e66fb20d-fb54-4964-9fb8-0ca14b94f895-sa-token\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.580842 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e66fb20d-fb54-4964-9fb8-0ca14b94f895-trusted-ca\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.580899 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4wdj\" (UniqueName: \"kubernetes.io/projected/e66fb20d-fb54-4964-9fb8-0ca14b94f895-kube-api-access-k4wdj\") pod \"collector-2bmhz\" (UID: \"e66fb20d-fb54-4964-9fb8-0ca14b94f895\") " pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:54 crc kubenswrapper[4907]: I0127 18:20:54.659123 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-2bmhz"
Jan 27 18:20:55 crc kubenswrapper[4907]: I0127 18:20:55.128656 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-2bmhz"]
Jan 27 18:20:55 crc kubenswrapper[4907]: I0127 18:20:55.268945 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-2bmhz" event={"ID":"e66fb20d-fb54-4964-9fb8-0ca14b94f895","Type":"ContainerStarted","Data":"083145679fa1ab5e2ff32c39e686a219dfbbcf851f08a4ce0ab950503451880b"}
Jan 27 18:20:55 crc kubenswrapper[4907]: I0127 18:20:55.763150 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29bf2bd0-d34f-4ca1-b84f-bb7a003039f4" path="/var/lib/kubelet/pods/29bf2bd0-d34f-4ca1-b84f-bb7a003039f4/volumes"
Jan 27 18:20:56 crc kubenswrapper[4907]: I0127 18:20:56.521766 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 18:20:56 crc kubenswrapper[4907]: I0127 18:20:56.522103 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 18:21:02 crc kubenswrapper[4907]: I0127 18:21:02.335463 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-2bmhz" event={"ID":"e66fb20d-fb54-4964-9fb8-0ca14b94f895","Type":"ContainerStarted","Data":"5743ba0e03537f31e433c3a8183e7e1b98e4307f1bd6dc7dee8a350f54b02297"}
Jan 27 18:21:02 crc kubenswrapper[4907]: I0127 18:21:02.370107 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-2bmhz" podStartSLOduration=1.8599209349999999 podStartE2EDuration="8.370082826s" podCreationTimestamp="2026-01-27 18:20:54 +0000 UTC" firstStartedPulling="2026-01-27 18:20:55.114255302 +0000 UTC m=+910.243537914" lastFinishedPulling="2026-01-27 18:21:01.624417193 +0000 UTC m=+916.753699805" observedRunningTime="2026-01-27 18:21:02.368521981 +0000 UTC m=+917.497804603" watchObservedRunningTime="2026-01-27 18:21:02.370082826 +0000 UTC m=+917.499365448"
Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.201401 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hzcvt"]
Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.215231 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hzcvt"
Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.220357 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hzcvt"]
Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.413266 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-utilities\") pod \"redhat-marketplace-hzcvt\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " pod="openshift-marketplace/redhat-marketplace-hzcvt"
Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.413935 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-catalog-content\") pod \"redhat-marketplace-hzcvt\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " pod="openshift-marketplace/redhat-marketplace-hzcvt"
Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.414092 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmmp8\" (UniqueName: \"kubernetes.io/projected/da2a8ac5-42c7-4326-aef8-b7f713af971d-kube-api-access-mmmp8\") pod \"redhat-marketplace-hzcvt\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " pod="openshift-marketplace/redhat-marketplace-hzcvt"
Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.515693 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-utilities\") pod \"redhat-marketplace-hzcvt\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " pod="openshift-marketplace/redhat-marketplace-hzcvt"
Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.515937 4907 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-catalog-content\") pod \"redhat-marketplace-hzcvt\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.515997 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmmp8\" (UniqueName: \"kubernetes.io/projected/da2a8ac5-42c7-4326-aef8-b7f713af971d-kube-api-access-mmmp8\") pod \"redhat-marketplace-hzcvt\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.516594 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-utilities\") pod \"redhat-marketplace-hzcvt\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.516891 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-catalog-content\") pod \"redhat-marketplace-hzcvt\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.541853 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmmp8\" (UniqueName: \"kubernetes.io/projected/da2a8ac5-42c7-4326-aef8-b7f713af971d-kube-api-access-mmmp8\") pod \"redhat-marketplace-hzcvt\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:23 crc kubenswrapper[4907]: I0127 18:21:23.550929 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:24 crc kubenswrapper[4907]: I0127 18:21:24.047541 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hzcvt"] Jan 27 18:21:24 crc kubenswrapper[4907]: I0127 18:21:24.561119 4907 generic.go:334] "Generic (PLEG): container finished" podID="da2a8ac5-42c7-4326-aef8-b7f713af971d" containerID="a566b24f826b2eb0f3e9ce007f7672a6177c5cc1af375848b4f940d8eca2032d" exitCode=0 Jan 27 18:21:24 crc kubenswrapper[4907]: I0127 18:21:24.561252 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzcvt" event={"ID":"da2a8ac5-42c7-4326-aef8-b7f713af971d","Type":"ContainerDied","Data":"a566b24f826b2eb0f3e9ce007f7672a6177c5cc1af375848b4f940d8eca2032d"} Jan 27 18:21:24 crc kubenswrapper[4907]: I0127 18:21:24.561341 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzcvt" event={"ID":"da2a8ac5-42c7-4326-aef8-b7f713af971d","Type":"ContainerStarted","Data":"698b12fed85423f3f2113a134a20e1d4960384ade0e7258431251b0d5a827251"} Jan 27 18:21:25 crc kubenswrapper[4907]: I0127 18:21:25.572570 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzcvt" event={"ID":"da2a8ac5-42c7-4326-aef8-b7f713af971d","Type":"ContainerStarted","Data":"5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666"} Jan 27 18:21:26 crc kubenswrapper[4907]: I0127 18:21:26.521740 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:21:26 crc kubenswrapper[4907]: I0127 18:21:26.522130 4907 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:21:26 crc kubenswrapper[4907]: I0127 18:21:26.583380 4907 generic.go:334] "Generic (PLEG): container finished" podID="da2a8ac5-42c7-4326-aef8-b7f713af971d" containerID="5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666" exitCode=0 Jan 27 18:21:26 crc kubenswrapper[4907]: I0127 18:21:26.583447 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzcvt" event={"ID":"da2a8ac5-42c7-4326-aef8-b7f713af971d","Type":"ContainerDied","Data":"5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666"} Jan 27 18:21:27 crc kubenswrapper[4907]: I0127 18:21:27.592913 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzcvt" event={"ID":"da2a8ac5-42c7-4326-aef8-b7f713af971d","Type":"ContainerStarted","Data":"cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264"} Jan 27 18:21:27 crc kubenswrapper[4907]: I0127 18:21:27.618121 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hzcvt" podStartSLOduration=2.190707799 podStartE2EDuration="4.618101996s" podCreationTimestamp="2026-01-27 18:21:23 +0000 UTC" firstStartedPulling="2026-01-27 18:21:24.564162878 +0000 UTC m=+939.693445490" lastFinishedPulling="2026-01-27 18:21:26.991557035 +0000 UTC m=+942.120839687" observedRunningTime="2026-01-27 18:21:27.611677051 +0000 UTC m=+942.740959663" watchObservedRunningTime="2026-01-27 18:21:27.618101996 +0000 UTC m=+942.747384598" Jan 27 18:21:33 crc kubenswrapper[4907]: I0127 18:21:33.551715 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 
18:21:33 crc kubenswrapper[4907]: I0127 18:21:33.552627 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:33 crc kubenswrapper[4907]: I0127 18:21:33.621057 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:33 crc kubenswrapper[4907]: I0127 18:21:33.718663 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:33 crc kubenswrapper[4907]: I0127 18:21:33.873910 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hzcvt"] Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.332681 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq"] Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.334404 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.336551 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.347715 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq"] Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.437356 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.437576 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.437614 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpd8j\" (UniqueName: \"kubernetes.io/projected/9d2f9525-f0c4-4585-8162-0bce8fb139e9-kube-api-access-mpd8j\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:35 crc kubenswrapper[4907]: 
I0127 18:21:35.539475 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.539528 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpd8j\" (UniqueName: \"kubernetes.io/projected/9d2f9525-f0c4-4585-8162-0bce8fb139e9-kube-api-access-mpd8j\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.539604 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.540134 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.540185 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.563565 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpd8j\" (UniqueName: \"kubernetes.io/projected/9d2f9525-f0c4-4585-8162-0bce8fb139e9-kube-api-access-mpd8j\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.652580 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:35 crc kubenswrapper[4907]: I0127 18:21:35.657642 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hzcvt" podUID="da2a8ac5-42c7-4326-aef8-b7f713af971d" containerName="registry-server" containerID="cri-o://cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264" gracePeriod=2 Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.132304 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.257672 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmmp8\" (UniqueName: \"kubernetes.io/projected/da2a8ac5-42c7-4326-aef8-b7f713af971d-kube-api-access-mmmp8\") pod \"da2a8ac5-42c7-4326-aef8-b7f713af971d\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.257738 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-catalog-content\") pod \"da2a8ac5-42c7-4326-aef8-b7f713af971d\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.257774 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-utilities\") pod \"da2a8ac5-42c7-4326-aef8-b7f713af971d\" (UID: \"da2a8ac5-42c7-4326-aef8-b7f713af971d\") " Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.259001 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-utilities" (OuterVolumeSpecName: "utilities") pod "da2a8ac5-42c7-4326-aef8-b7f713af971d" (UID: "da2a8ac5-42c7-4326-aef8-b7f713af971d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.278886 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da2a8ac5-42c7-4326-aef8-b7f713af971d-kube-api-access-mmmp8" (OuterVolumeSpecName: "kube-api-access-mmmp8") pod "da2a8ac5-42c7-4326-aef8-b7f713af971d" (UID: "da2a8ac5-42c7-4326-aef8-b7f713af971d"). InnerVolumeSpecName "kube-api-access-mmmp8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.306808 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da2a8ac5-42c7-4326-aef8-b7f713af971d" (UID: "da2a8ac5-42c7-4326-aef8-b7f713af971d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.314311 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq"] Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.359453 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.359488 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2a8ac5-42c7-4326-aef8-b7f713af971d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.359499 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmmp8\" (UniqueName: \"kubernetes.io/projected/da2a8ac5-42c7-4326-aef8-b7f713af971d-kube-api-access-mmmp8\") on node \"crc\" DevicePath \"\"" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.666773 4907 generic.go:334] "Generic (PLEG): container finished" podID="da2a8ac5-42c7-4326-aef8-b7f713af971d" containerID="cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264" exitCode=0 Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.666844 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzcvt" 
event={"ID":"da2a8ac5-42c7-4326-aef8-b7f713af971d","Type":"ContainerDied","Data":"cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264"} Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.666888 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hzcvt" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.667206 4907 scope.go:117] "RemoveContainer" containerID="cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.667184 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzcvt" event={"ID":"da2a8ac5-42c7-4326-aef8-b7f713af971d","Type":"ContainerDied","Data":"698b12fed85423f3f2113a134a20e1d4960384ade0e7258431251b0d5a827251"} Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.668631 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" event={"ID":"9d2f9525-f0c4-4585-8162-0bce8fb139e9","Type":"ContainerStarted","Data":"2dced305c89728ae510d177ef3f9e6689cbe765e0aafd7ccd430786723390672"} Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.687115 4907 scope.go:117] "RemoveContainer" containerID="5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.722598 4907 scope.go:117] "RemoveContainer" containerID="a566b24f826b2eb0f3e9ce007f7672a6177c5cc1af375848b4f940d8eca2032d" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.736712 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hzcvt"] Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.748819 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hzcvt"] Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.752302 4907 scope.go:117] 
"RemoveContainer" containerID="cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264" Jan 27 18:21:36 crc kubenswrapper[4907]: E0127 18:21:36.752957 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264\": container with ID starting with cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264 not found: ID does not exist" containerID="cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.753154 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264"} err="failed to get container status \"cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264\": rpc error: code = NotFound desc = could not find container \"cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264\": container with ID starting with cd7feed83af562a0cfc9337dea200fe03b2103fc50a89e1d2b601aa399ee5264 not found: ID does not exist" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.753259 4907 scope.go:117] "RemoveContainer" containerID="5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666" Jan 27 18:21:36 crc kubenswrapper[4907]: E0127 18:21:36.754051 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666\": container with ID starting with 5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666 not found: ID does not exist" containerID="5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.754103 4907 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666"} err="failed to get container status \"5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666\": rpc error: code = NotFound desc = could not find container \"5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666\": container with ID starting with 5ca98631687ca20f01ecca0f6cba02db8f2884ceb6dc1a64f3c93a4cd4964666 not found: ID does not exist" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.754132 4907 scope.go:117] "RemoveContainer" containerID="a566b24f826b2eb0f3e9ce007f7672a6177c5cc1af375848b4f940d8eca2032d" Jan 27 18:21:36 crc kubenswrapper[4907]: E0127 18:21:36.754377 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a566b24f826b2eb0f3e9ce007f7672a6177c5cc1af375848b4f940d8eca2032d\": container with ID starting with a566b24f826b2eb0f3e9ce007f7672a6177c5cc1af375848b4f940d8eca2032d not found: ID does not exist" containerID="a566b24f826b2eb0f3e9ce007f7672a6177c5cc1af375848b4f940d8eca2032d" Jan 27 18:21:36 crc kubenswrapper[4907]: I0127 18:21:36.754405 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a566b24f826b2eb0f3e9ce007f7672a6177c5cc1af375848b4f940d8eca2032d"} err="failed to get container status \"a566b24f826b2eb0f3e9ce007f7672a6177c5cc1af375848b4f940d8eca2032d\": rpc error: code = NotFound desc = could not find container \"a566b24f826b2eb0f3e9ce007f7672a6177c5cc1af375848b4f940d8eca2032d\": container with ID starting with a566b24f826b2eb0f3e9ce007f7672a6177c5cc1af375848b4f940d8eca2032d not found: ID does not exist" Jan 27 18:21:37 crc kubenswrapper[4907]: I0127 18:21:37.678296 4907 generic.go:334] "Generic (PLEG): container finished" podID="9d2f9525-f0c4-4585-8162-0bce8fb139e9" containerID="a304ea0c599d1f5ce44ba39cc0f5f4d71173bd08485fbd83f5766b2c9d27a3f0" exitCode=0 Jan 27 18:21:37 crc kubenswrapper[4907]: 
I0127 18:21:37.678344 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" event={"ID":"9d2f9525-f0c4-4585-8162-0bce8fb139e9","Type":"ContainerDied","Data":"a304ea0c599d1f5ce44ba39cc0f5f4d71173bd08485fbd83f5766b2c9d27a3f0"} Jan 27 18:21:37 crc kubenswrapper[4907]: I0127 18:21:37.758341 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da2a8ac5-42c7-4326-aef8-b7f713af971d" path="/var/lib/kubelet/pods/da2a8ac5-42c7-4326-aef8-b7f713af971d/volumes" Jan 27 18:21:39 crc kubenswrapper[4907]: I0127 18:21:39.697375 4907 generic.go:334] "Generic (PLEG): container finished" podID="9d2f9525-f0c4-4585-8162-0bce8fb139e9" containerID="762fd705ef8f5be28eb2f612171e0e1d8edf46304f898d2651b11fccee0a5d69" exitCode=0 Jan 27 18:21:39 crc kubenswrapper[4907]: I0127 18:21:39.697489 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" event={"ID":"9d2f9525-f0c4-4585-8162-0bce8fb139e9","Type":"ContainerDied","Data":"762fd705ef8f5be28eb2f612171e0e1d8edf46304f898d2651b11fccee0a5d69"} Jan 27 18:21:40 crc kubenswrapper[4907]: I0127 18:21:40.710160 4907 generic.go:334] "Generic (PLEG): container finished" podID="9d2f9525-f0c4-4585-8162-0bce8fb139e9" containerID="604979c1cc096c0e7084885f113274d201082a4fdf5604b1a1b11c4589c56282" exitCode=0 Jan 27 18:21:40 crc kubenswrapper[4907]: I0127 18:21:40.710256 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" event={"ID":"9d2f9525-f0c4-4585-8162-0bce8fb139e9","Type":"ContainerDied","Data":"604979c1cc096c0e7084885f113274d201082a4fdf5604b1a1b11c4589c56282"} Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.076787 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.175972 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpd8j\" (UniqueName: \"kubernetes.io/projected/9d2f9525-f0c4-4585-8162-0bce8fb139e9-kube-api-access-mpd8j\") pod \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.176035 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-util\") pod \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.176144 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-bundle\") pod \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\" (UID: \"9d2f9525-f0c4-4585-8162-0bce8fb139e9\") " Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.177183 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-bundle" (OuterVolumeSpecName: "bundle") pod "9d2f9525-f0c4-4585-8162-0bce8fb139e9" (UID: "9d2f9525-f0c4-4585-8162-0bce8fb139e9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.189910 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d2f9525-f0c4-4585-8162-0bce8fb139e9-kube-api-access-mpd8j" (OuterVolumeSpecName: "kube-api-access-mpd8j") pod "9d2f9525-f0c4-4585-8162-0bce8fb139e9" (UID: "9d2f9525-f0c4-4585-8162-0bce8fb139e9"). InnerVolumeSpecName "kube-api-access-mpd8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.210169 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-util" (OuterVolumeSpecName: "util") pod "9d2f9525-f0c4-4585-8162-0bce8fb139e9" (UID: "9d2f9525-f0c4-4585-8162-0bce8fb139e9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.278761 4907 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.278841 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpd8j\" (UniqueName: \"kubernetes.io/projected/9d2f9525-f0c4-4585-8162-0bce8fb139e9-kube-api-access-mpd8j\") on node \"crc\" DevicePath \"\"" Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.278870 4907 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d2f9525-f0c4-4585-8162-0bce8fb139e9-util\") on node \"crc\" DevicePath \"\"" Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.730593 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" event={"ID":"9d2f9525-f0c4-4585-8162-0bce8fb139e9","Type":"ContainerDied","Data":"2dced305c89728ae510d177ef3f9e6689cbe765e0aafd7ccd430786723390672"} Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.730639 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dced305c89728ae510d177ef3f9e6689cbe765e0aafd7ccd430786723390672" Jan 27 18:21:42 crc kubenswrapper[4907]: I0127 18:21:42.730716 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.375393 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-j277h"] Jan 27 18:21:45 crc kubenswrapper[4907]: E0127 18:21:45.376028 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2f9525-f0c4-4585-8162-0bce8fb139e9" containerName="util" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.376045 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2f9525-f0c4-4585-8162-0bce8fb139e9" containerName="util" Jan 27 18:21:45 crc kubenswrapper[4907]: E0127 18:21:45.376073 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2a8ac5-42c7-4326-aef8-b7f713af971d" containerName="extract-utilities" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.376082 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2a8ac5-42c7-4326-aef8-b7f713af971d" containerName="extract-utilities" Jan 27 18:21:45 crc kubenswrapper[4907]: E0127 18:21:45.376092 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2a8ac5-42c7-4326-aef8-b7f713af971d" containerName="registry-server" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.376101 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2a8ac5-42c7-4326-aef8-b7f713af971d" containerName="registry-server" Jan 27 18:21:45 crc kubenswrapper[4907]: E0127 18:21:45.376113 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2f9525-f0c4-4585-8162-0bce8fb139e9" containerName="extract" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.376121 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2f9525-f0c4-4585-8162-0bce8fb139e9" containerName="extract" Jan 27 18:21:45 crc kubenswrapper[4907]: E0127 18:21:45.376138 4907 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="da2a8ac5-42c7-4326-aef8-b7f713af971d" containerName="extract-content" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.376145 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2a8ac5-42c7-4326-aef8-b7f713af971d" containerName="extract-content" Jan 27 18:21:45 crc kubenswrapper[4907]: E0127 18:21:45.376167 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2f9525-f0c4-4585-8162-0bce8fb139e9" containerName="pull" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.376175 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2f9525-f0c4-4585-8162-0bce8fb139e9" containerName="pull" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.376349 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2f9525-f0c4-4585-8162-0bce8fb139e9" containerName="extract" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.376370 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="da2a8ac5-42c7-4326-aef8-b7f713af971d" containerName="registry-server" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.377018 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-j277h" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.381741 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-trkrl" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.381741 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.381802 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.397013 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-j277h"] Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.538471 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jzlg\" (UniqueName: \"kubernetes.io/projected/a9bfdf1d-7169-4990-bc4b-0a4b96f5ff0b-kube-api-access-7jzlg\") pod \"nmstate-operator-646758c888-j277h\" (UID: \"a9bfdf1d-7169-4990-bc4b-0a4b96f5ff0b\") " pod="openshift-nmstate/nmstate-operator-646758c888-j277h" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.640421 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jzlg\" (UniqueName: \"kubernetes.io/projected/a9bfdf1d-7169-4990-bc4b-0a4b96f5ff0b-kube-api-access-7jzlg\") pod \"nmstate-operator-646758c888-j277h\" (UID: \"a9bfdf1d-7169-4990-bc4b-0a4b96f5ff0b\") " pod="openshift-nmstate/nmstate-operator-646758c888-j277h" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.662066 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.673097 4907 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.690080 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jzlg\" (UniqueName: \"kubernetes.io/projected/a9bfdf1d-7169-4990-bc4b-0a4b96f5ff0b-kube-api-access-7jzlg\") pod \"nmstate-operator-646758c888-j277h\" (UID: \"a9bfdf1d-7169-4990-bc4b-0a4b96f5ff0b\") " pod="openshift-nmstate/nmstate-operator-646758c888-j277h" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.701422 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-trkrl" Jan 27 18:21:45 crc kubenswrapper[4907]: I0127 18:21:45.710550 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-j277h" Jan 27 18:21:46 crc kubenswrapper[4907]: I0127 18:21:46.204068 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-j277h"] Jan 27 18:21:46 crc kubenswrapper[4907]: I0127 18:21:46.771900 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-j277h" event={"ID":"a9bfdf1d-7169-4990-bc4b-0a4b96f5ff0b","Type":"ContainerStarted","Data":"218fc0101b700e17b626492d02594460f79d43d669615d94942a61dedd455251"} Jan 27 18:21:49 crc kubenswrapper[4907]: I0127 18:21:49.794932 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-j277h" event={"ID":"a9bfdf1d-7169-4990-bc4b-0a4b96f5ff0b","Type":"ContainerStarted","Data":"9a87c7f81bc2e2e6d5c8e8e41952cf0cf2616476d46cb2531812a9745472eb2f"} Jan 27 18:21:49 crc kubenswrapper[4907]: I0127 18:21:49.817978 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-j277h" podStartSLOduration=2.287954051 podStartE2EDuration="4.817955133s" podCreationTimestamp="2026-01-27 18:21:45 +0000 UTC" 
firstStartedPulling="2026-01-27 18:21:46.220585595 +0000 UTC m=+961.349868257" lastFinishedPulling="2026-01-27 18:21:48.750586727 +0000 UTC m=+963.879869339" observedRunningTime="2026-01-27 18:21:49.813242037 +0000 UTC m=+964.942524639" watchObservedRunningTime="2026-01-27 18:21:49.817955133 +0000 UTC m=+964.947237745" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.437988 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-f7vbh"] Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.440228 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-f7vbh" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.443887 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-vdvgh" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.454793 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2"] Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.455906 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.490681 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.509818 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-f7vbh"] Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.521654 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.521721 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.521775 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.522466 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6c6457fcad0aadd72b623dd84842669e5ae8a7cd9babd90c21be3d1544aa1b2c"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.522530 4907 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://6c6457fcad0aadd72b623dd84842669e5ae8a7cd9babd90c21be3d1544aa1b2c" gracePeriod=600 Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.530631 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-wz5df"] Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.532086 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.537535 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2"] Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.592938 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c53f2859-15de-4c57-81ba-539c7787b649-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-5q5h2\" (UID: \"c53f2859-15de-4c57-81ba-539c7787b649\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.593437 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2szrm\" (UniqueName: \"kubernetes.io/projected/eeb93cd2-3631-4fad-a0d1-01232bbf9202-kube-api-access-2szrm\") pod \"nmstate-metrics-54757c584b-f7vbh\" (UID: \"eeb93cd2-3631-4fad-a0d1-01232bbf9202\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-f7vbh" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.593483 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5brz8\" (UniqueName: \"kubernetes.io/projected/c53f2859-15de-4c57-81ba-539c7787b649-kube-api-access-5brz8\") pod \"nmstate-webhook-8474b5b9d8-5q5h2\" (UID: 
\"c53f2859-15de-4c57-81ba-539c7787b649\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.626787 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w"] Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.627933 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.635976 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.636348 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-dtwbg" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.636521 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.676455 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w"] Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.698740 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c53f2859-15de-4c57-81ba-539c7787b649-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-5q5h2\" (UID: \"c53f2859-15de-4c57-81ba-539c7787b649\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.698814 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2szrm\" (UniqueName: \"kubernetes.io/projected/eeb93cd2-3631-4fad-a0d1-01232bbf9202-kube-api-access-2szrm\") pod \"nmstate-metrics-54757c584b-f7vbh\" (UID: \"eeb93cd2-3631-4fad-a0d1-01232bbf9202\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-f7vbh" 
Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.698853 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5brz8\" (UniqueName: \"kubernetes.io/projected/c53f2859-15de-4c57-81ba-539c7787b649-kube-api-access-5brz8\") pod \"nmstate-webhook-8474b5b9d8-5q5h2\" (UID: \"c53f2859-15de-4c57-81ba-539c7787b649\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.698897 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0b5adf10-ea9c-48b5-bece-3ee8683423e3-dbus-socket\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.698931 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0b5adf10-ea9c-48b5-bece-3ee8683423e3-ovs-socket\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.699026 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4njpj\" (UniqueName: \"kubernetes.io/projected/0b5adf10-ea9c-48b5-bece-3ee8683423e3-kube-api-access-4njpj\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.699132 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0b5adf10-ea9c-48b5-bece-3ee8683423e3-nmstate-lock\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " 
pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.723358 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2szrm\" (UniqueName: \"kubernetes.io/projected/eeb93cd2-3631-4fad-a0d1-01232bbf9202-kube-api-access-2szrm\") pod \"nmstate-metrics-54757c584b-f7vbh\" (UID: \"eeb93cd2-3631-4fad-a0d1-01232bbf9202\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-f7vbh" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.738431 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5brz8\" (UniqueName: \"kubernetes.io/projected/c53f2859-15de-4c57-81ba-539c7787b649-kube-api-access-5brz8\") pod \"nmstate-webhook-8474b5b9d8-5q5h2\" (UID: \"c53f2859-15de-4c57-81ba-539c7787b649\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.738809 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c53f2859-15de-4c57-81ba-539c7787b649-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-5q5h2\" (UID: \"c53f2859-15de-4c57-81ba-539c7787b649\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.800602 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4njpj\" (UniqueName: \"kubernetes.io/projected/0b5adf10-ea9c-48b5-bece-3ee8683423e3-kube-api-access-4njpj\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.800900 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d3336bb0-ef0d-47f3-b3c7-de266154f20e-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-rhr2w\" (UID: 
\"d3336bb0-ef0d-47f3-b3c7-de266154f20e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.800945 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0b5adf10-ea9c-48b5-bece-3ee8683423e3-nmstate-lock\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.800975 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rh2q\" (UniqueName: \"kubernetes.io/projected/d3336bb0-ef0d-47f3-b3c7-de266154f20e-kube-api-access-8rh2q\") pod \"nmstate-console-plugin-7754f76f8b-rhr2w\" (UID: \"d3336bb0-ef0d-47f3-b3c7-de266154f20e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.801007 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3336bb0-ef0d-47f3-b3c7-de266154f20e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-rhr2w\" (UID: \"d3336bb0-ef0d-47f3-b3c7-de266154f20e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.801038 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0b5adf10-ea9c-48b5-bece-3ee8683423e3-dbus-socket\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.801060 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/0b5adf10-ea9c-48b5-bece-3ee8683423e3-ovs-socket\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.801152 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0b5adf10-ea9c-48b5-bece-3ee8683423e3-ovs-socket\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.801195 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0b5adf10-ea9c-48b5-bece-3ee8683423e3-nmstate-lock\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.801550 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0b5adf10-ea9c-48b5-bece-3ee8683423e3-dbus-socket\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.816037 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-f7vbh" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.827947 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.869629 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4njpj\" (UniqueName: \"kubernetes.io/projected/0b5adf10-ea9c-48b5-bece-3ee8683423e3-kube-api-access-4njpj\") pod \"nmstate-handler-wz5df\" (UID: \"0b5adf10-ea9c-48b5-bece-3ee8683423e3\") " pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.870214 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.890852 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="6c6457fcad0aadd72b623dd84842669e5ae8a7cd9babd90c21be3d1544aa1b2c" exitCode=0 Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.890915 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"6c6457fcad0aadd72b623dd84842669e5ae8a7cd9babd90c21be3d1544aa1b2c"} Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.890956 4907 scope.go:117] "RemoveContainer" containerID="6099244ea1b816357fdc0578901eb21429999a3cda00a97382e3e7b69c0e3a0f" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.907176 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rh2q\" (UniqueName: \"kubernetes.io/projected/d3336bb0-ef0d-47f3-b3c7-de266154f20e-kube-api-access-8rh2q\") pod \"nmstate-console-plugin-7754f76f8b-rhr2w\" (UID: \"d3336bb0-ef0d-47f3-b3c7-de266154f20e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.907255 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3336bb0-ef0d-47f3-b3c7-de266154f20e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-rhr2w\" (UID: \"d3336bb0-ef0d-47f3-b3c7-de266154f20e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.907395 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d3336bb0-ef0d-47f3-b3c7-de266154f20e-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-rhr2w\" (UID: \"d3336bb0-ef0d-47f3-b3c7-de266154f20e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.908522 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d3336bb0-ef0d-47f3-b3c7-de266154f20e-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-rhr2w\" (UID: \"d3336bb0-ef0d-47f3-b3c7-de266154f20e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.916807 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-65dccccccb-km74l"] Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.917803 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.922401 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3336bb0-ef0d-47f3-b3c7-de266154f20e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-rhr2w\" (UID: \"d3336bb0-ef0d-47f3-b3c7-de266154f20e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.954612 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rh2q\" (UniqueName: \"kubernetes.io/projected/d3336bb0-ef0d-47f3-b3c7-de266154f20e-kube-api-access-8rh2q\") pod \"nmstate-console-plugin-7754f76f8b-rhr2w\" (UID: \"d3336bb0-ef0d-47f3-b3c7-de266154f20e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.974539 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65dccccccb-km74l"] Jan 27 18:21:56 crc kubenswrapper[4907]: I0127 18:21:56.984656 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.113202 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-config\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.113522 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-oauth-serving-cert\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.113568 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-oauth-config\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.113594 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bglhw\" (UniqueName: \"kubernetes.io/projected/4e69063c-9ede-4474-9fd3-b16db60b9a7c-kube-api-access-bglhw\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.113631 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-service-ca\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.113653 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-trusted-ca-bundle\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.113940 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-serving-cert\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.216430 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-serving-cert\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.216487 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-config\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.216508 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" 
(UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-oauth-serving-cert\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.216532 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-oauth-config\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.216569 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bglhw\" (UniqueName: \"kubernetes.io/projected/4e69063c-9ede-4474-9fd3-b16db60b9a7c-kube-api-access-bglhw\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.216599 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-service-ca\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.216616 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-trusted-ca-bundle\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.218113 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-trusted-ca-bundle\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.219348 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-config\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.220109 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-oauth-serving-cert\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.221411 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-service-ca\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.222352 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-serving-cert\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.222447 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-oauth-config\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.240362 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bglhw\" (UniqueName: \"kubernetes.io/projected/4e69063c-9ede-4474-9fd3-b16db60b9a7c-kube-api-access-bglhw\") pod \"console-65dccccccb-km74l\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.263287 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.466157 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-f7vbh"] Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.517302 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2"] Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.524084 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w"] Jan 27 18:21:57 crc kubenswrapper[4907]: W0127 18:21:57.526383 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3336bb0_ef0d_47f3_b3c7_de266154f20e.slice/crio-374038ce3449747a83027e449b931d768434e338ed0e4d857d8e6a851ab36de6 WatchSource:0}: Error finding container 374038ce3449747a83027e449b931d768434e338ed0e4d857d8e6a851ab36de6: Status 404 returned error can't find the container with id 374038ce3449747a83027e449b931d768434e338ed0e4d857d8e6a851ab36de6 Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.729113 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/console-65dccccccb-km74l"] Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.903914 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65dccccccb-km74l" event={"ID":"4e69063c-9ede-4474-9fd3-b16db60b9a7c","Type":"ContainerStarted","Data":"d8e779a538fd171e62688ea894409db54be158536da7dede33422f76801c0085"} Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.905774 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" event={"ID":"c53f2859-15de-4c57-81ba-539c7787b649","Type":"ContainerStarted","Data":"9c4f090d3d1772eacb63bc8cc2c4d88b15585015e82ffaded6691f0b0cab40a8"} Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.907196 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" event={"ID":"d3336bb0-ef0d-47f3-b3c7-de266154f20e","Type":"ContainerStarted","Data":"374038ce3449747a83027e449b931d768434e338ed0e4d857d8e6a851ab36de6"} Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.909859 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"07545f0ac6e9596ef48552354e292c52ec4eabdd5bcbde6f20c6f81f90669809"} Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.911417 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-f7vbh" event={"ID":"eeb93cd2-3631-4fad-a0d1-01232bbf9202","Type":"ContainerStarted","Data":"c51966cbf8d7df3679571580c8c1541d61f7f67978c0f9093aa8098e39e0f850"} Jan 27 18:21:57 crc kubenswrapper[4907]: I0127 18:21:57.913855 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wz5df" 
event={"ID":"0b5adf10-ea9c-48b5-bece-3ee8683423e3","Type":"ContainerStarted","Data":"1bea98502944fad4c9e28c5ce050a438e4ff06eb2b47f57f5f0a3b24e88df233"} Jan 27 18:21:58 crc kubenswrapper[4907]: I0127 18:21:58.944647 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65dccccccb-km74l" event={"ID":"4e69063c-9ede-4474-9fd3-b16db60b9a7c","Type":"ContainerStarted","Data":"73c9e83ba8aebe975e11ef4d07847ef42cb88307c5cad2d2f3f2c241d0b95d45"} Jan 27 18:21:58 crc kubenswrapper[4907]: I0127 18:21:58.979828 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-65dccccccb-km74l" podStartSLOduration=2.979806679 podStartE2EDuration="2.979806679s" podCreationTimestamp="2026-01-27 18:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:21:58.974799985 +0000 UTC m=+974.104082597" watchObservedRunningTime="2026-01-27 18:21:58.979806679 +0000 UTC m=+974.109089291" Jan 27 18:22:00 crc kubenswrapper[4907]: I0127 18:22:00.958790 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-f7vbh" event={"ID":"eeb93cd2-3631-4fad-a0d1-01232bbf9202","Type":"ContainerStarted","Data":"c3500d0cfc2ae6512c39749f3a8b9a88ce3305913cb344b157a8fa7612d61968"} Jan 27 18:22:00 crc kubenswrapper[4907]: I0127 18:22:00.961580 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" event={"ID":"c53f2859-15de-4c57-81ba-539c7787b649","Type":"ContainerStarted","Data":"8474efa2db5d68150ca85bb9f44c99eca252b89372397e5d03f96f2817940286"} Jan 27 18:22:00 crc kubenswrapper[4907]: I0127 18:22:00.961728 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" Jan 27 18:22:00 crc kubenswrapper[4907]: I0127 18:22:00.963050 4907 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" event={"ID":"d3336bb0-ef0d-47f3-b3c7-de266154f20e","Type":"ContainerStarted","Data":"3b008bfec8c68ce8c0345a2757db7fc91355b901a4743fc1bc7e1f35303d6af4"} Jan 27 18:22:00 crc kubenswrapper[4907]: I0127 18:22:00.988919 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" podStartSLOduration=1.919284732 podStartE2EDuration="4.988895343s" podCreationTimestamp="2026-01-27 18:21:56 +0000 UTC" firstStartedPulling="2026-01-27 18:21:57.513865559 +0000 UTC m=+972.643148171" lastFinishedPulling="2026-01-27 18:22:00.58347617 +0000 UTC m=+975.712758782" observedRunningTime="2026-01-27 18:22:00.984929748 +0000 UTC m=+976.114212350" watchObservedRunningTime="2026-01-27 18:22:00.988895343 +0000 UTC m=+976.118177955" Jan 27 18:22:01 crc kubenswrapper[4907]: I0127 18:22:01.003651 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhr2w" podStartSLOduration=1.950305028 podStartE2EDuration="5.003630449s" podCreationTimestamp="2026-01-27 18:21:56 +0000 UTC" firstStartedPulling="2026-01-27 18:21:57.528026198 +0000 UTC m=+972.657308810" lastFinishedPulling="2026-01-27 18:22:00.581351579 +0000 UTC m=+975.710634231" observedRunningTime="2026-01-27 18:22:01.002732053 +0000 UTC m=+976.132014675" watchObservedRunningTime="2026-01-27 18:22:01.003630449 +0000 UTC m=+976.132913061" Jan 27 18:22:01 crc kubenswrapper[4907]: I0127 18:22:01.974123 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wz5df" event={"ID":"0b5adf10-ea9c-48b5-bece-3ee8683423e3","Type":"ContainerStarted","Data":"28f3efc7fa28430e988f663f6384aeab8a491fa85cf4be6af123fc71a6e82338"} Jan 27 18:22:01 crc kubenswrapper[4907]: I0127 18:22:01.993311 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-wz5df" 
podStartSLOduration=2.369403916 podStartE2EDuration="5.99328676s" podCreationTimestamp="2026-01-27 18:21:56 +0000 UTC" firstStartedPulling="2026-01-27 18:21:56.982999512 +0000 UTC m=+972.112282114" lastFinishedPulling="2026-01-27 18:22:00.606882336 +0000 UTC m=+975.736164958" observedRunningTime="2026-01-27 18:22:01.987252936 +0000 UTC m=+977.116535548" watchObservedRunningTime="2026-01-27 18:22:01.99328676 +0000 UTC m=+977.122569412" Jan 27 18:22:02 crc kubenswrapper[4907]: I0127 18:22:02.985796 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:22:03 crc kubenswrapper[4907]: I0127 18:22:03.993001 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-f7vbh" event={"ID":"eeb93cd2-3631-4fad-a0d1-01232bbf9202","Type":"ContainerStarted","Data":"4bc09389b686797c47fe71efee2c090f86632624510af7f22d52c1d4d4e555cf"} Jan 27 18:22:04 crc kubenswrapper[4907]: I0127 18:22:04.021917 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-f7vbh" podStartSLOduration=1.877805853 podStartE2EDuration="8.021896226s" podCreationTimestamp="2026-01-27 18:21:56 +0000 UTC" firstStartedPulling="2026-01-27 18:21:57.454585446 +0000 UTC m=+972.583868058" lastFinishedPulling="2026-01-27 18:22:03.598675819 +0000 UTC m=+978.727958431" observedRunningTime="2026-01-27 18:22:04.01544283 +0000 UTC m=+979.144725442" watchObservedRunningTime="2026-01-27 18:22:04.021896226 +0000 UTC m=+979.151178848" Jan 27 18:22:06 crc kubenswrapper[4907]: I0127 18:22:06.905339 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.263780 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 
18:22:07.264000 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.270286 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.718112 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wntx4"] Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.720647 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.738829 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wntx4"] Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.827535 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-catalog-content\") pod \"certified-operators-wntx4\" (UID: \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.827674 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4lz8\" (UniqueName: \"kubernetes.io/projected/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-kube-api-access-q4lz8\") pod \"certified-operators-wntx4\" (UID: \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.827813 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-utilities\") pod 
\"certified-operators-wntx4\" (UID: \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.929328 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-utilities\") pod \"certified-operators-wntx4\" (UID: \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.930327 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-utilities\") pod \"certified-operators-wntx4\" (UID: \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.930643 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-catalog-content\") pod \"certified-operators-wntx4\" (UID: \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.930985 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-catalog-content\") pod \"certified-operators-wntx4\" (UID: \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.931063 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4lz8\" (UniqueName: \"kubernetes.io/projected/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-kube-api-access-q4lz8\") pod \"certified-operators-wntx4\" (UID: 
\"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:07 crc kubenswrapper[4907]: I0127 18:22:07.954600 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4lz8\" (UniqueName: \"kubernetes.io/projected/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-kube-api-access-q4lz8\") pod \"certified-operators-wntx4\" (UID: \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:08 crc kubenswrapper[4907]: I0127 18:22:08.027890 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:22:08 crc kubenswrapper[4907]: I0127 18:22:08.079468 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:08 crc kubenswrapper[4907]: I0127 18:22:08.157695 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b447cd8-v5z5k"] Jan 27 18:22:08 crc kubenswrapper[4907]: I0127 18:22:08.724834 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wntx4"] Jan 27 18:22:08 crc kubenswrapper[4907]: W0127 18:22:08.727634 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fe6c326_a67b_4381_bdfa_8716d5caf5c8.slice/crio-9c80eee0c7257f2a02a1afa3c67d9d6c0ad39b5e91e7f3d942eed4db94052e1a WatchSource:0}: Error finding container 9c80eee0c7257f2a02a1afa3c67d9d6c0ad39b5e91e7f3d942eed4db94052e1a: Status 404 returned error can't find the container with id 9c80eee0c7257f2a02a1afa3c67d9d6c0ad39b5e91e7f3d942eed4db94052e1a Jan 27 18:22:09 crc kubenswrapper[4907]: I0127 18:22:09.028168 4907 generic.go:334] "Generic (PLEG): container finished" podID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" 
containerID="fe7ca101d8975396c1d2b98029054696d23ade03e47c89370669171740afcacf" exitCode=0 Jan 27 18:22:09 crc kubenswrapper[4907]: I0127 18:22:09.028227 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wntx4" event={"ID":"8fe6c326-a67b-4381-bdfa-8716d5caf5c8","Type":"ContainerDied","Data":"fe7ca101d8975396c1d2b98029054696d23ade03e47c89370669171740afcacf"} Jan 27 18:22:09 crc kubenswrapper[4907]: I0127 18:22:09.028276 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wntx4" event={"ID":"8fe6c326-a67b-4381-bdfa-8716d5caf5c8","Type":"ContainerStarted","Data":"9c80eee0c7257f2a02a1afa3c67d9d6c0ad39b5e91e7f3d942eed4db94052e1a"} Jan 27 18:22:10 crc kubenswrapper[4907]: I0127 18:22:10.040949 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wntx4" event={"ID":"8fe6c326-a67b-4381-bdfa-8716d5caf5c8","Type":"ContainerStarted","Data":"cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e"} Jan 27 18:22:11 crc kubenswrapper[4907]: I0127 18:22:11.055052 4907 generic.go:334] "Generic (PLEG): container finished" podID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" containerID="cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e" exitCode=0 Jan 27 18:22:11 crc kubenswrapper[4907]: I0127 18:22:11.055176 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wntx4" event={"ID":"8fe6c326-a67b-4381-bdfa-8716d5caf5c8","Type":"ContainerDied","Data":"cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e"} Jan 27 18:22:11 crc kubenswrapper[4907]: I0127 18:22:11.058093 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:22:12 crc kubenswrapper[4907]: I0127 18:22:12.068112 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wntx4" 
event={"ID":"8fe6c326-a67b-4381-bdfa-8716d5caf5c8","Type":"ContainerStarted","Data":"5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97"} Jan 27 18:22:12 crc kubenswrapper[4907]: I0127 18:22:12.093014 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wntx4" podStartSLOduration=2.668965401 podStartE2EDuration="5.092992722s" podCreationTimestamp="2026-01-27 18:22:07 +0000 UTC" firstStartedPulling="2026-01-27 18:22:09.029891618 +0000 UTC m=+984.159174230" lastFinishedPulling="2026-01-27 18:22:11.453918929 +0000 UTC m=+986.583201551" observedRunningTime="2026-01-27 18:22:12.090743047 +0000 UTC m=+987.220025699" watchObservedRunningTime="2026-01-27 18:22:12.092992722 +0000 UTC m=+987.222275344" Jan 27 18:22:16 crc kubenswrapper[4907]: I0127 18:22:16.833078 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" Jan 27 18:22:18 crc kubenswrapper[4907]: I0127 18:22:18.079906 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:18 crc kubenswrapper[4907]: I0127 18:22:18.081818 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:18 crc kubenswrapper[4907]: I0127 18:22:18.155066 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:19 crc kubenswrapper[4907]: I0127 18:22:19.165476 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:19 crc kubenswrapper[4907]: I0127 18:22:19.212516 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wntx4"] Jan 27 18:22:21 crc kubenswrapper[4907]: I0127 18:22:21.157027 4907 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wntx4" podUID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" containerName="registry-server" containerID="cri-o://5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97" gracePeriod=2 Jan 27 18:22:21 crc kubenswrapper[4907]: I0127 18:22:21.641421 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:21 crc kubenswrapper[4907]: I0127 18:22:21.784847 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4lz8\" (UniqueName: \"kubernetes.io/projected/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-kube-api-access-q4lz8\") pod \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\" (UID: \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " Jan 27 18:22:21 crc kubenswrapper[4907]: I0127 18:22:21.785113 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-utilities\") pod \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\" (UID: \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " Jan 27 18:22:21 crc kubenswrapper[4907]: I0127 18:22:21.785165 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-catalog-content\") pod \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\" (UID: \"8fe6c326-a67b-4381-bdfa-8716d5caf5c8\") " Jan 27 18:22:21 crc kubenswrapper[4907]: I0127 18:22:21.785916 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-utilities" (OuterVolumeSpecName: "utilities") pod "8fe6c326-a67b-4381-bdfa-8716d5caf5c8" (UID: "8fe6c326-a67b-4381-bdfa-8716d5caf5c8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:22:21 crc kubenswrapper[4907]: I0127 18:22:21.790971 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-kube-api-access-q4lz8" (OuterVolumeSpecName: "kube-api-access-q4lz8") pod "8fe6c326-a67b-4381-bdfa-8716d5caf5c8" (UID: "8fe6c326-a67b-4381-bdfa-8716d5caf5c8"). InnerVolumeSpecName "kube-api-access-q4lz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:22:21 crc kubenswrapper[4907]: I0127 18:22:21.854030 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fe6c326-a67b-4381-bdfa-8716d5caf5c8" (UID: "8fe6c326-a67b-4381-bdfa-8716d5caf5c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:22:21 crc kubenswrapper[4907]: I0127 18:22:21.887173 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4lz8\" (UniqueName: \"kubernetes.io/projected/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-kube-api-access-q4lz8\") on node \"crc\" DevicePath \"\"" Jan 27 18:22:21 crc kubenswrapper[4907]: I0127 18:22:21.887222 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:22:21 crc kubenswrapper[4907]: I0127 18:22:21.887232 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fe6c326-a67b-4381-bdfa-8716d5caf5c8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.164702 4907 generic.go:334] "Generic (PLEG): container finished" podID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" 
containerID="5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97" exitCode=0 Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.164743 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wntx4" event={"ID":"8fe6c326-a67b-4381-bdfa-8716d5caf5c8","Type":"ContainerDied","Data":"5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97"} Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.164775 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wntx4" event={"ID":"8fe6c326-a67b-4381-bdfa-8716d5caf5c8","Type":"ContainerDied","Data":"9c80eee0c7257f2a02a1afa3c67d9d6c0ad39b5e91e7f3d942eed4db94052e1a"} Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.164792 4907 scope.go:117] "RemoveContainer" containerID="5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97" Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.164793 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wntx4" Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.191086 4907 scope.go:117] "RemoveContainer" containerID="cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e" Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.214529 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wntx4"] Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.218291 4907 scope.go:117] "RemoveContainer" containerID="fe7ca101d8975396c1d2b98029054696d23ade03e47c89370669171740afcacf" Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.220859 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wntx4"] Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.259153 4907 scope.go:117] "RemoveContainer" containerID="5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97" Jan 27 18:22:22 crc kubenswrapper[4907]: E0127 18:22:22.263954 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97\": container with ID starting with 5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97 not found: ID does not exist" containerID="5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97" Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.264210 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97"} err="failed to get container status \"5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97\": rpc error: code = NotFound desc = could not find container \"5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97\": container with ID starting with 5754ecf1694fe7028bfd63e9421cbee9ec420226a7d79f6b5c1521e988d11a97 not 
found: ID does not exist" Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.264319 4907 scope.go:117] "RemoveContainer" containerID="cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e" Jan 27 18:22:22 crc kubenswrapper[4907]: E0127 18:22:22.267086 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e\": container with ID starting with cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e not found: ID does not exist" containerID="cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e" Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.267133 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e"} err="failed to get container status \"cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e\": rpc error: code = NotFound desc = could not find container \"cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e\": container with ID starting with cb9c2be3c95786114ce951472d4040627e9011efa2e66ae088ee4ecf8dbe1f5e not found: ID does not exist" Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.267159 4907 scope.go:117] "RemoveContainer" containerID="fe7ca101d8975396c1d2b98029054696d23ade03e47c89370669171740afcacf" Jan 27 18:22:22 crc kubenswrapper[4907]: E0127 18:22:22.267788 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe7ca101d8975396c1d2b98029054696d23ade03e47c89370669171740afcacf\": container with ID starting with fe7ca101d8975396c1d2b98029054696d23ade03e47c89370669171740afcacf not found: ID does not exist" containerID="fe7ca101d8975396c1d2b98029054696d23ade03e47c89370669171740afcacf" Jan 27 18:22:22 crc kubenswrapper[4907]: I0127 18:22:22.267813 4907 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe7ca101d8975396c1d2b98029054696d23ade03e47c89370669171740afcacf"} err="failed to get container status \"fe7ca101d8975396c1d2b98029054696d23ade03e47c89370669171740afcacf\": rpc error: code = NotFound desc = could not find container \"fe7ca101d8975396c1d2b98029054696d23ade03e47c89370669171740afcacf\": container with ID starting with fe7ca101d8975396c1d2b98029054696d23ade03e47c89370669171740afcacf not found: ID does not exist" Jan 27 18:22:23 crc kubenswrapper[4907]: I0127 18:22:23.760808 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" path="/var/lib/kubelet/pods/8fe6c326-a67b-4381-bdfa-8716d5caf5c8/volumes" Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.255505 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6b447cd8-v5z5k" podUID="19ce08bb-03eb-4088-9b1a-4d42adedf584" containerName="console" containerID="cri-o://7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4" gracePeriod=15 Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.774070 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b447cd8-v5z5k_19ce08bb-03eb-4088-9b1a-4d42adedf584/console/0.log" Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.774541 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.814106 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-service-ca\") pod \"19ce08bb-03eb-4088-9b1a-4d42adedf584\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") "
Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.814207 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-oauth-serving-cert\") pod \"19ce08bb-03eb-4088-9b1a-4d42adedf584\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") "
Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.814225 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-oauth-config\") pod \"19ce08bb-03eb-4088-9b1a-4d42adedf584\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") "
Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.815295 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-service-ca" (OuterVolumeSpecName: "service-ca") pod "19ce08bb-03eb-4088-9b1a-4d42adedf584" (UID: "19ce08bb-03eb-4088-9b1a-4d42adedf584"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.815381 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8s9p\" (UniqueName: \"kubernetes.io/projected/19ce08bb-03eb-4088-9b1a-4d42adedf584-kube-api-access-h8s9p\") pod \"19ce08bb-03eb-4088-9b1a-4d42adedf584\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") "
Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.815432 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-config\") pod \"19ce08bb-03eb-4088-9b1a-4d42adedf584\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") "
Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.815457 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-serving-cert\") pod \"19ce08bb-03eb-4088-9b1a-4d42adedf584\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") "
Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.815487 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-trusted-ca-bundle\") pod \"19ce08bb-03eb-4088-9b1a-4d42adedf584\" (UID: \"19ce08bb-03eb-4088-9b1a-4d42adedf584\") "
Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.815945 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-config" (OuterVolumeSpecName: "console-config") pod "19ce08bb-03eb-4088-9b1a-4d42adedf584" (UID: "19ce08bb-03eb-4088-9b1a-4d42adedf584"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.816302 4907 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.816315 4907 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.816311 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "19ce08bb-03eb-4088-9b1a-4d42adedf584" (UID: "19ce08bb-03eb-4088-9b1a-4d42adedf584"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.816405 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "19ce08bb-03eb-4088-9b1a-4d42adedf584" (UID: "19ce08bb-03eb-4088-9b1a-4d42adedf584"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.821789 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19ce08bb-03eb-4088-9b1a-4d42adedf584-kube-api-access-h8s9p" (OuterVolumeSpecName: "kube-api-access-h8s9p") pod "19ce08bb-03eb-4088-9b1a-4d42adedf584" (UID: "19ce08bb-03eb-4088-9b1a-4d42adedf584"). InnerVolumeSpecName "kube-api-access-h8s9p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.822794 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "19ce08bb-03eb-4088-9b1a-4d42adedf584" (UID: "19ce08bb-03eb-4088-9b1a-4d42adedf584"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.823936 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "19ce08bb-03eb-4088-9b1a-4d42adedf584" (UID: "19ce08bb-03eb-4088-9b1a-4d42adedf584"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.918022 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8s9p\" (UniqueName: \"kubernetes.io/projected/19ce08bb-03eb-4088-9b1a-4d42adedf584-kube-api-access-h8s9p\") on node \"crc\" DevicePath \"\""
Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.918074 4907 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.918094 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.918109 4907 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19ce08bb-03eb-4088-9b1a-4d42adedf584-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:22:33 crc kubenswrapper[4907]: I0127 18:22:33.918124 4907 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19ce08bb-03eb-4088-9b1a-4d42adedf584-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:22:34 crc kubenswrapper[4907]: I0127 18:22:34.256475 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b447cd8-v5z5k_19ce08bb-03eb-4088-9b1a-4d42adedf584/console/0.log"
Jan 27 18:22:34 crc kubenswrapper[4907]: I0127 18:22:34.256526 4907 generic.go:334] "Generic (PLEG): container finished" podID="19ce08bb-03eb-4088-9b1a-4d42adedf584" containerID="7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4" exitCode=2
Jan 27 18:22:34 crc kubenswrapper[4907]: I0127 18:22:34.256557 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b447cd8-v5z5k" event={"ID":"19ce08bb-03eb-4088-9b1a-4d42adedf584","Type":"ContainerDied","Data":"7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4"}
Jan 27 18:22:34 crc kubenswrapper[4907]: I0127 18:22:34.256609 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b447cd8-v5z5k" event={"ID":"19ce08bb-03eb-4088-9b1a-4d42adedf584","Type":"ContainerDied","Data":"d22d0be7c5012debcbe1ac6b1b934a7244865eb06d8f858be9fb3384ddfdb6a5"}
Jan 27 18:22:34 crc kubenswrapper[4907]: I0127 18:22:34.256625 4907 scope.go:117] "RemoveContainer" containerID="7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4"
Jan 27 18:22:34 crc kubenswrapper[4907]: I0127 18:22:34.256650 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b447cd8-v5z5k"
Jan 27 18:22:34 crc kubenswrapper[4907]: I0127 18:22:34.277158 4907 scope.go:117] "RemoveContainer" containerID="7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4"
Jan 27 18:22:34 crc kubenswrapper[4907]: E0127 18:22:34.277633 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4\": container with ID starting with 7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4 not found: ID does not exist" containerID="7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4"
Jan 27 18:22:34 crc kubenswrapper[4907]: I0127 18:22:34.277688 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4"} err="failed to get container status \"7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4\": rpc error: code = NotFound desc = could not find container \"7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4\": container with ID starting with 7394b338b980b94219824a5c2f7c8bf0b50c8e07a3f6fa298e407cfe438e49f4 not found: ID does not exist"
Jan 27 18:22:34 crc kubenswrapper[4907]: I0127 18:22:34.287692 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b447cd8-v5z5k"]
Jan 27 18:22:34 crc kubenswrapper[4907]: I0127 18:22:34.296368 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6b447cd8-v5z5k"]
Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.695765 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj"]
Jan 27 18:22:35 crc kubenswrapper[4907]: E0127 18:22:35.696417 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" containerName="extract-content"
Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.696434 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" containerName="extract-content"
Jan 27 18:22:35 crc kubenswrapper[4907]: E0127 18:22:35.696453 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" containerName="registry-server"
Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.696460 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" containerName="registry-server"
Jan 27 18:22:35 crc kubenswrapper[4907]: E0127 18:22:35.696476 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" containerName="extract-utilities"
Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.696485 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" containerName="extract-utilities"
Jan 27 18:22:35 crc kubenswrapper[4907]: E0127 18:22:35.696496 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ce08bb-03eb-4088-9b1a-4d42adedf584" containerName="console"
Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.696503 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ce08bb-03eb-4088-9b1a-4d42adedf584" containerName="console"
Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.696677 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ce08bb-03eb-4088-9b1a-4d42adedf584" containerName="console"
Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.696701 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fe6c326-a67b-4381-bdfa-8716d5caf5c8" containerName="registry-server"
Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.697964 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj"
Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.699877 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.719347 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj"]
Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.745825 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj"
Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.745901 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj"
Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.745958 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2cfs\" (UniqueName: \"kubernetes.io/projected/bc3f86b6-0741-4ef9-9244-fc9378289ec2-kube-api-access-s2cfs\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj"
Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.759930 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19ce08bb-03eb-4088-9b1a-4d42adedf584" path="/var/lib/kubelet/pods/19ce08bb-03eb-4088-9b1a-4d42adedf584/volumes"
Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.847751 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj"
Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.847815 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2cfs\" (UniqueName: \"kubernetes.io/projected/bc3f86b6-0741-4ef9-9244-fc9378289ec2-kube-api-access-s2cfs\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj"
Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.847917 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj"
Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.848206 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj"
Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.848255 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj"
Jan 27 18:22:35 crc kubenswrapper[4907]: I0127 18:22:35.866533 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2cfs\" (UniqueName: \"kubernetes.io/projected/bc3f86b6-0741-4ef9-9244-fc9378289ec2-kube-api-access-s2cfs\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj"
Jan 27 18:22:36 crc kubenswrapper[4907]: I0127 18:22:36.012735 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj"
Jan 27 18:22:36 crc kubenswrapper[4907]: I0127 18:22:36.491872 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj"]
Jan 27 18:22:37 crc kubenswrapper[4907]: I0127 18:22:37.286600 4907 generic.go:334] "Generic (PLEG): container finished" podID="bc3f86b6-0741-4ef9-9244-fc9378289ec2" containerID="48edaa40224ebcfb864021e626ee6dc1bc7bb660ac9246fbc606d9b1c024fdba" exitCode=0
Jan 27 18:22:37 crc kubenswrapper[4907]: I0127 18:22:37.286701 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" event={"ID":"bc3f86b6-0741-4ef9-9244-fc9378289ec2","Type":"ContainerDied","Data":"48edaa40224ebcfb864021e626ee6dc1bc7bb660ac9246fbc606d9b1c024fdba"}
Jan 27 18:22:37 crc kubenswrapper[4907]: I0127 18:22:37.286906 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" event={"ID":"bc3f86b6-0741-4ef9-9244-fc9378289ec2","Type":"ContainerStarted","Data":"a5e8e98afdaec1ff30c265a471351153992b6d0397afaaeaa80cc63edc7c5d48"}
Jan 27 18:22:39 crc kubenswrapper[4907]: I0127 18:22:39.306215 4907 generic.go:334] "Generic (PLEG): container finished" podID="bc3f86b6-0741-4ef9-9244-fc9378289ec2" containerID="e1d8da94dc92c4d5492d43b230e90d21bf5d1cb1385a8d732758fa209550dec9" exitCode=0
Jan 27 18:22:39 crc kubenswrapper[4907]: I0127 18:22:39.306318 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" event={"ID":"bc3f86b6-0741-4ef9-9244-fc9378289ec2","Type":"ContainerDied","Data":"e1d8da94dc92c4d5492d43b230e90d21bf5d1cb1385a8d732758fa209550dec9"}
Jan 27 18:22:40 crc kubenswrapper[4907]: I0127 18:22:40.314271 4907 generic.go:334] "Generic (PLEG): container finished" podID="bc3f86b6-0741-4ef9-9244-fc9378289ec2" containerID="06559d3f4e691c90b6f22cf90945852e002676c5b31a8a9cc32461047b73fa73" exitCode=0
Jan 27 18:22:40 crc kubenswrapper[4907]: I0127 18:22:40.314324 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" event={"ID":"bc3f86b6-0741-4ef9-9244-fc9378289ec2","Type":"ContainerDied","Data":"06559d3f4e691c90b6f22cf90945852e002676c5b31a8a9cc32461047b73fa73"}
Jan 27 18:22:41 crc kubenswrapper[4907]: I0127 18:22:41.661448 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj"
Jan 27 18:22:41 crc kubenswrapper[4907]: I0127 18:22:41.735677 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2cfs\" (UniqueName: \"kubernetes.io/projected/bc3f86b6-0741-4ef9-9244-fc9378289ec2-kube-api-access-s2cfs\") pod \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") "
Jan 27 18:22:41 crc kubenswrapper[4907]: I0127 18:22:41.735837 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-bundle\") pod \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") "
Jan 27 18:22:41 crc kubenswrapper[4907]: I0127 18:22:41.735865 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-util\") pod \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\" (UID: \"bc3f86b6-0741-4ef9-9244-fc9378289ec2\") "
Jan 27 18:22:41 crc kubenswrapper[4907]: I0127 18:22:41.737952 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-bundle" (OuterVolumeSpecName: "bundle") pod "bc3f86b6-0741-4ef9-9244-fc9378289ec2" (UID: "bc3f86b6-0741-4ef9-9244-fc9378289ec2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:22:41 crc kubenswrapper[4907]: I0127 18:22:41.745015 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc3f86b6-0741-4ef9-9244-fc9378289ec2-kube-api-access-s2cfs" (OuterVolumeSpecName: "kube-api-access-s2cfs") pod "bc3f86b6-0741-4ef9-9244-fc9378289ec2" (UID: "bc3f86b6-0741-4ef9-9244-fc9378289ec2"). InnerVolumeSpecName "kube-api-access-s2cfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:22:41 crc kubenswrapper[4907]: I0127 18:22:41.755063 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-util" (OuterVolumeSpecName: "util") pod "bc3f86b6-0741-4ef9-9244-fc9378289ec2" (UID: "bc3f86b6-0741-4ef9-9244-fc9378289ec2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:22:41 crc kubenswrapper[4907]: I0127 18:22:41.838378 4907 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:22:41 crc kubenswrapper[4907]: I0127 18:22:41.838419 4907 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bc3f86b6-0741-4ef9-9244-fc9378289ec2-util\") on node \"crc\" DevicePath \"\""
Jan 27 18:22:41 crc kubenswrapper[4907]: I0127 18:22:41.838428 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2cfs\" (UniqueName: \"kubernetes.io/projected/bc3f86b6-0741-4ef9-9244-fc9378289ec2-kube-api-access-s2cfs\") on node \"crc\" DevicePath \"\""
Jan 27 18:22:42 crc kubenswrapper[4907]: I0127 18:22:42.332849 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj" event={"ID":"bc3f86b6-0741-4ef9-9244-fc9378289ec2","Type":"ContainerDied","Data":"a5e8e98afdaec1ff30c265a471351153992b6d0397afaaeaa80cc63edc7c5d48"}
Jan 27 18:22:42 crc kubenswrapper[4907]: I0127 18:22:42.332908 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5e8e98afdaec1ff30c265a471351153992b6d0397afaaeaa80cc63edc7c5d48"
Jan 27 18:22:42 crc kubenswrapper[4907]: I0127 18:22:42.332945 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.424024 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6858498495-rcqbh"]
Jan 27 18:22:50 crc kubenswrapper[4907]: E0127 18:22:50.424695 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3f86b6-0741-4ef9-9244-fc9378289ec2" containerName="util"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.424707 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3f86b6-0741-4ef9-9244-fc9378289ec2" containerName="util"
Jan 27 18:22:50 crc kubenswrapper[4907]: E0127 18:22:50.424722 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3f86b6-0741-4ef9-9244-fc9378289ec2" containerName="extract"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.424728 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3f86b6-0741-4ef9-9244-fc9378289ec2" containerName="extract"
Jan 27 18:22:50 crc kubenswrapper[4907]: E0127 18:22:50.424739 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3f86b6-0741-4ef9-9244-fc9378289ec2" containerName="pull"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.424745 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3f86b6-0741-4ef9-9244-fc9378289ec2" containerName="pull"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.424876 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc3f86b6-0741-4ef9-9244-fc9378289ec2" containerName="extract"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.425366 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.427379 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.427749 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.427798 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.427960 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.428389 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-bbvxf"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.459343 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6858498495-rcqbh"]
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.492721 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9a776a10-0883-468e-a8d3-087ca6429b1b-webhook-cert\") pod \"metallb-operator-controller-manager-6858498495-rcqbh\" (UID: \"9a776a10-0883-468e-a8d3-087ca6429b1b\") " pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.492812 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85tjb\" (UniqueName: \"kubernetes.io/projected/9a776a10-0883-468e-a8d3-087ca6429b1b-kube-api-access-85tjb\") pod \"metallb-operator-controller-manager-6858498495-rcqbh\" (UID: \"9a776a10-0883-468e-a8d3-087ca6429b1b\") " pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.492872 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9a776a10-0883-468e-a8d3-087ca6429b1b-apiservice-cert\") pod \"metallb-operator-controller-manager-6858498495-rcqbh\" (UID: \"9a776a10-0883-468e-a8d3-087ca6429b1b\") " pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.594236 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9a776a10-0883-468e-a8d3-087ca6429b1b-webhook-cert\") pod \"metallb-operator-controller-manager-6858498495-rcqbh\" (UID: \"9a776a10-0883-468e-a8d3-087ca6429b1b\") " pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.594316 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85tjb\" (UniqueName: \"kubernetes.io/projected/9a776a10-0883-468e-a8d3-087ca6429b1b-kube-api-access-85tjb\") pod \"metallb-operator-controller-manager-6858498495-rcqbh\" (UID: \"9a776a10-0883-468e-a8d3-087ca6429b1b\") " pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.594359 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9a776a10-0883-468e-a8d3-087ca6429b1b-apiservice-cert\") pod \"metallb-operator-controller-manager-6858498495-rcqbh\" (UID: \"9a776a10-0883-468e-a8d3-087ca6429b1b\") " pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.601280 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9a776a10-0883-468e-a8d3-087ca6429b1b-apiservice-cert\") pod \"metallb-operator-controller-manager-6858498495-rcqbh\" (UID: \"9a776a10-0883-468e-a8d3-087ca6429b1b\") " pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.609965 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9a776a10-0883-468e-a8d3-087ca6429b1b-webhook-cert\") pod \"metallb-operator-controller-manager-6858498495-rcqbh\" (UID: \"9a776a10-0883-468e-a8d3-087ca6429b1b\") " pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.620094 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85tjb\" (UniqueName: \"kubernetes.io/projected/9a776a10-0883-468e-a8d3-087ca6429b1b-kube-api-access-85tjb\") pod \"metallb-operator-controller-manager-6858498495-rcqbh\" (UID: \"9a776a10-0883-468e-a8d3-087ca6429b1b\") " pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.748122 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.762875 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"]
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.763971 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.765694 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-kvf5m"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.765924 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.772536 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.776951 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"]
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.801482 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/202ff14a-3733-4ccf-8202-94fac75bdfc4-webhook-cert\") pod \"metallb-operator-webhook-server-548b7f8fd-7zpsk\" (UID: \"202ff14a-3733-4ccf-8202-94fac75bdfc4\") " pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.801602 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5shf6\" (UniqueName: \"kubernetes.io/projected/202ff14a-3733-4ccf-8202-94fac75bdfc4-kube-api-access-5shf6\") pod \"metallb-operator-webhook-server-548b7f8fd-7zpsk\" (UID: \"202ff14a-3733-4ccf-8202-94fac75bdfc4\") " pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.801726 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/202ff14a-3733-4ccf-8202-94fac75bdfc4-apiservice-cert\") pod \"metallb-operator-webhook-server-548b7f8fd-7zpsk\" (UID: \"202ff14a-3733-4ccf-8202-94fac75bdfc4\") " pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.903269 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/202ff14a-3733-4ccf-8202-94fac75bdfc4-apiservice-cert\") pod \"metallb-operator-webhook-server-548b7f8fd-7zpsk\" (UID: \"202ff14a-3733-4ccf-8202-94fac75bdfc4\") " pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.903869 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/202ff14a-3733-4ccf-8202-94fac75bdfc4-webhook-cert\") pod \"metallb-operator-webhook-server-548b7f8fd-7zpsk\" (UID: \"202ff14a-3733-4ccf-8202-94fac75bdfc4\") " pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.903967 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5shf6\" (UniqueName: \"kubernetes.io/projected/202ff14a-3733-4ccf-8202-94fac75bdfc4-kube-api-access-5shf6\") pod \"metallb-operator-webhook-server-548b7f8fd-7zpsk\" (UID: \"202ff14a-3733-4ccf-8202-94fac75bdfc4\") " pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.911330 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/202ff14a-3733-4ccf-8202-94fac75bdfc4-webhook-cert\") pod \"metallb-operator-webhook-server-548b7f8fd-7zpsk\" (UID: \"202ff14a-3733-4ccf-8202-94fac75bdfc4\") " pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.922097 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/202ff14a-3733-4ccf-8202-94fac75bdfc4-apiservice-cert\") pod \"metallb-operator-webhook-server-548b7f8fd-7zpsk\" (UID: \"202ff14a-3733-4ccf-8202-94fac75bdfc4\") " pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:50 crc kubenswrapper[4907]: I0127 18:22:50.924808 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5shf6\" (UniqueName: \"kubernetes.io/projected/202ff14a-3733-4ccf-8202-94fac75bdfc4-kube-api-access-5shf6\") pod \"metallb-operator-webhook-server-548b7f8fd-7zpsk\" (UID: \"202ff14a-3733-4ccf-8202-94fac75bdfc4\") " pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:51 crc kubenswrapper[4907]: I0127 18:22:51.141765 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"
Jan 27 18:22:51 crc kubenswrapper[4907]: I0127 18:22:51.203774 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6858498495-rcqbh"]
Jan 27 18:22:51 crc kubenswrapper[4907]: W0127 18:22:51.212409 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a776a10_0883_468e_a8d3_087ca6429b1b.slice/crio-66d454e55c35644fb997939f0070dc3732d0468b9bfc003dbde9b40730a67f09 WatchSource:0}: Error finding container 66d454e55c35644fb997939f0070dc3732d0468b9bfc003dbde9b40730a67f09: Status 404 returned error can't find the container with id 66d454e55c35644fb997939f0070dc3732d0468b9bfc003dbde9b40730a67f09
Jan 27 18:22:51 crc kubenswrapper[4907]: I0127 18:22:51.415470 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" event={"ID":"9a776a10-0883-468e-a8d3-087ca6429b1b","Type":"ContainerStarted","Data":"66d454e55c35644fb997939f0070dc3732d0468b9bfc003dbde9b40730a67f09"}
Jan 27 18:22:51 crc kubenswrapper[4907]: I0127 18:22:51.593616 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk"]
Jan 27 18:22:51 crc kubenswrapper[4907]: W0127 18:22:51.596542 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod202ff14a_3733_4ccf_8202_94fac75bdfc4.slice/crio-efa2a1f4027cd7092647e7f6e0286c3c8c5d2dab8865341a5c5faef2ea1b0cfe WatchSource:0}: Error finding container efa2a1f4027cd7092647e7f6e0286c3c8c5d2dab8865341a5c5faef2ea1b0cfe: Status 404 returned error can't find the container with id efa2a1f4027cd7092647e7f6e0286c3c8c5d2dab8865341a5c5faef2ea1b0cfe
Jan 27 18:22:52 crc kubenswrapper[4907]: I0127 18:22:52.425017 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk" event={"ID":"202ff14a-3733-4ccf-8202-94fac75bdfc4","Type":"ContainerStarted","Data":"efa2a1f4027cd7092647e7f6e0286c3c8c5d2dab8865341a5c5faef2ea1b0cfe"}
Jan 27 18:22:55 crc kubenswrapper[4907]: I0127 18:22:55.454900 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" event={"ID":"9a776a10-0883-468e-a8d3-087ca6429b1b","Type":"ContainerStarted","Data":"e5df74a29f441c00381140ee9c5bf88402dcab24c0e3e0599caea608cfb497d9"}
Jan 27 18:22:55 crc kubenswrapper[4907]: I0127 18:22:55.455267 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh"
Jan 27 18:22:55 crc kubenswrapper[4907]: I0127 18:22:55.475293 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh"
podStartSLOduration=2.467301732 podStartE2EDuration="5.475273742s" podCreationTimestamp="2026-01-27 18:22:50 +0000 UTC" firstStartedPulling="2026-01-27 18:22:51.216843426 +0000 UTC m=+1026.346126038" lastFinishedPulling="2026-01-27 18:22:54.224815436 +0000 UTC m=+1029.354098048" observedRunningTime="2026-01-27 18:22:55.471498413 +0000 UTC m=+1030.600781035" watchObservedRunningTime="2026-01-27 18:22:55.475273742 +0000 UTC m=+1030.604556354" Jan 27 18:22:57 crc kubenswrapper[4907]: I0127 18:22:57.469494 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk" event={"ID":"202ff14a-3733-4ccf-8202-94fac75bdfc4","Type":"ContainerStarted","Data":"8b725d1a8516b5a78160938808b4596cc405881f6830ed402500ba20d107018a"} Jan 27 18:22:57 crc kubenswrapper[4907]: I0127 18:22:57.470042 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk" Jan 27 18:22:57 crc kubenswrapper[4907]: I0127 18:22:57.494777 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk" podStartSLOduration=2.551671719 podStartE2EDuration="7.494760065s" podCreationTimestamp="2026-01-27 18:22:50 +0000 UTC" firstStartedPulling="2026-01-27 18:22:51.599789249 +0000 UTC m=+1026.729071871" lastFinishedPulling="2026-01-27 18:22:56.542877605 +0000 UTC m=+1031.672160217" observedRunningTime="2026-01-27 18:22:57.490818671 +0000 UTC m=+1032.620101293" watchObservedRunningTime="2026-01-27 18:22:57.494760065 +0000 UTC m=+1032.624042677" Jan 27 18:23:11 crc kubenswrapper[4907]: I0127 18:23:11.146696 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk" Jan 27 18:23:30 crc kubenswrapper[4907]: I0127 18:23:30.751940 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.615921 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-csdnr"] Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.620241 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.623009 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.623258 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-xkjzk" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.624625 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt"] Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.625293 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.625614 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.627254 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.659797 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt"] Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.718580 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-597cv"] Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.719751 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-597cv" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.725022 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-f6f7l" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.725183 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.725306 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.725566 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.726741 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd967d05-2ecd-4578-9c41-22e36ff088c1-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-n9qqt\" (UID: \"dd967d05-2ecd-4578-9c41-22e36ff088c1\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.726789 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw9kg\" (UniqueName: \"kubernetes.io/projected/dd967d05-2ecd-4578-9c41-22e36ff088c1-kube-api-access-fw9kg\") pod \"frr-k8s-webhook-server-7df86c4f6c-n9qqt\" (UID: \"dd967d05-2ecd-4578-9c41-22e36ff088c1\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.726818 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/3a1b45eb-7bdd-4172-99f0-b74eabce028d-frr-startup\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc 
kubenswrapper[4907]: I0127 18:23:31.726845 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-reloader\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.726874 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-frr-conf\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.726891 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-frr-sockets\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.726922 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t6h9\" (UniqueName: \"kubernetes.io/projected/3a1b45eb-7bdd-4172-99f0-b74eabce028d-kube-api-access-4t6h9\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.726974 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a1b45eb-7bdd-4172-99f0-b74eabce028d-metrics-certs\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.726997 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-metrics\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.732372 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-zfszb"] Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.733816 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-zfszb" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.735698 4907 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.756195 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-zfszb"] Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.828763 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ea123ce-4328-4379-8310-dbfff15acfbf-cert\") pod \"controller-6968d8fdc4-zfszb\" (UID: \"2ea123ce-4328-4379-8310-dbfff15acfbf\") " pod="metallb-system/controller-6968d8fdc4-zfszb" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.828827 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-reloader\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.828862 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-frr-conf\") pod \"frr-k8s-csdnr\" (UID: 
\"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.828879 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-frr-sockets\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.828898 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-metrics-certs\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.828934 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/aa958bdc-32c5-4e9f-841e-7427fdb87b31-metallb-excludel2\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.828951 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r44t\" (UniqueName: \"kubernetes.io/projected/2ea123ce-4328-4379-8310-dbfff15acfbf-kube-api-access-6r44t\") pod \"controller-6968d8fdc4-zfszb\" (UID: \"2ea123ce-4328-4379-8310-dbfff15acfbf\") " pod="metallb-system/controller-6968d8fdc4-zfszb" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.828970 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t6h9\" (UniqueName: \"kubernetes.io/projected/3a1b45eb-7bdd-4172-99f0-b74eabce028d-kube-api-access-4t6h9\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " 
pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.828993 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-memberlist\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.829180 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a1b45eb-7bdd-4172-99f0-b74eabce028d-metrics-certs\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.829255 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-metrics\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.829327 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd967d05-2ecd-4578-9c41-22e36ff088c1-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-n9qqt\" (UID: \"dd967d05-2ecd-4578-9c41-22e36ff088c1\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.829363 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqmdv\" (UniqueName: \"kubernetes.io/projected/aa958bdc-32c5-4e9f-841e-7427fdb87b31-kube-api-access-bqmdv\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.829443 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fw9kg\" (UniqueName: \"kubernetes.io/projected/dd967d05-2ecd-4578-9c41-22e36ff088c1-kube-api-access-fw9kg\") pod \"frr-k8s-webhook-server-7df86c4f6c-n9qqt\" (UID: \"dd967d05-2ecd-4578-9c41-22e36ff088c1\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.829467 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-frr-sockets\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.829484 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ea123ce-4328-4379-8310-dbfff15acfbf-metrics-certs\") pod \"controller-6968d8fdc4-zfszb\" (UID: \"2ea123ce-4328-4379-8310-dbfff15acfbf\") " pod="metallb-system/controller-6968d8fdc4-zfszb" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.829522 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/3a1b45eb-7bdd-4172-99f0-b74eabce028d-frr-startup\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.829529 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-reloader\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.829864 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-frr-conf\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.829935 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/3a1b45eb-7bdd-4172-99f0-b74eabce028d-metrics\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.830684 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/3a1b45eb-7bdd-4172-99f0-b74eabce028d-frr-startup\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.852691 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd967d05-2ecd-4578-9c41-22e36ff088c1-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-n9qqt\" (UID: \"dd967d05-2ecd-4578-9c41-22e36ff088c1\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.855147 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a1b45eb-7bdd-4172-99f0-b74eabce028d-metrics-certs\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.856285 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw9kg\" (UniqueName: \"kubernetes.io/projected/dd967d05-2ecd-4578-9c41-22e36ff088c1-kube-api-access-fw9kg\") pod \"frr-k8s-webhook-server-7df86c4f6c-n9qqt\" (UID: \"dd967d05-2ecd-4578-9c41-22e36ff088c1\") " 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.856527 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t6h9\" (UniqueName: \"kubernetes.io/projected/3a1b45eb-7bdd-4172-99f0-b74eabce028d-kube-api-access-4t6h9\") pod \"frr-k8s-csdnr\" (UID: \"3a1b45eb-7bdd-4172-99f0-b74eabce028d\") " pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.931261 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-metrics-certs\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.931352 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/aa958bdc-32c5-4e9f-841e-7427fdb87b31-metallb-excludel2\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.932301 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/aa958bdc-32c5-4e9f-841e-7427fdb87b31-metallb-excludel2\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.931382 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r44t\" (UniqueName: \"kubernetes.io/projected/2ea123ce-4328-4379-8310-dbfff15acfbf-kube-api-access-6r44t\") pod \"controller-6968d8fdc4-zfszb\" (UID: \"2ea123ce-4328-4379-8310-dbfff15acfbf\") " pod="metallb-system/controller-6968d8fdc4-zfszb" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.932406 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-memberlist\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv" Jan 27 18:23:31 crc kubenswrapper[4907]: E0127 18:23:31.932511 4907 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 18:23:31 crc kubenswrapper[4907]: E0127 18:23:31.932580 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-memberlist podName:aa958bdc-32c5-4e9f-841e-7427fdb87b31 nodeName:}" failed. No retries permitted until 2026-01-27 18:23:32.432544229 +0000 UTC m=+1067.561826841 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-memberlist") pod "speaker-597cv" (UID: "aa958bdc-32c5-4e9f-841e-7427fdb87b31") : secret "metallb-memberlist" not found Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.932678 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqmdv\" (UniqueName: \"kubernetes.io/projected/aa958bdc-32c5-4e9f-841e-7427fdb87b31-kube-api-access-bqmdv\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.932722 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ea123ce-4328-4379-8310-dbfff15acfbf-metrics-certs\") pod \"controller-6968d8fdc4-zfszb\" (UID: \"2ea123ce-4328-4379-8310-dbfff15acfbf\") " pod="metallb-system/controller-6968d8fdc4-zfszb" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.932759 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/2ea123ce-4328-4379-8310-dbfff15acfbf-cert\") pod \"controller-6968d8fdc4-zfszb\" (UID: \"2ea123ce-4328-4379-8310-dbfff15acfbf\") " pod="metallb-system/controller-6968d8fdc4-zfszb" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.935774 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-metrics-certs\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.936275 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ea123ce-4328-4379-8310-dbfff15acfbf-cert\") pod \"controller-6968d8fdc4-zfszb\" (UID: \"2ea123ce-4328-4379-8310-dbfff15acfbf\") " pod="metallb-system/controller-6968d8fdc4-zfszb" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.937290 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ea123ce-4328-4379-8310-dbfff15acfbf-metrics-certs\") pod \"controller-6968d8fdc4-zfszb\" (UID: \"2ea123ce-4328-4379-8310-dbfff15acfbf\") " pod="metallb-system/controller-6968d8fdc4-zfszb" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.943913 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.953523 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.957343 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r44t\" (UniqueName: \"kubernetes.io/projected/2ea123ce-4328-4379-8310-dbfff15acfbf-kube-api-access-6r44t\") pod \"controller-6968d8fdc4-zfszb\" (UID: \"2ea123ce-4328-4379-8310-dbfff15acfbf\") " pod="metallb-system/controller-6968d8fdc4-zfszb" Jan 27 18:23:31 crc kubenswrapper[4907]: I0127 18:23:31.963247 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqmdv\" (UniqueName: \"kubernetes.io/projected/aa958bdc-32c5-4e9f-841e-7427fdb87b31-kube-api-access-bqmdv\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv" Jan 27 18:23:32 crc kubenswrapper[4907]: I0127 18:23:32.054176 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-zfszb" Jan 27 18:23:32 crc kubenswrapper[4907]: I0127 18:23:32.436997 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt"] Jan 27 18:23:32 crc kubenswrapper[4907]: I0127 18:23:32.439791 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-memberlist\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv" Jan 27 18:23:32 crc kubenswrapper[4907]: E0127 18:23:32.439970 4907 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 18:23:32 crc kubenswrapper[4907]: E0127 18:23:32.440053 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-memberlist podName:aa958bdc-32c5-4e9f-841e-7427fdb87b31 
nodeName:}" failed. No retries permitted until 2026-01-27 18:23:33.440030278 +0000 UTC m=+1068.569312890 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-memberlist") pod "speaker-597cv" (UID: "aa958bdc-32c5-4e9f-841e-7427fdb87b31") : secret "metallb-memberlist" not found Jan 27 18:23:32 crc kubenswrapper[4907]: I0127 18:23:32.490185 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-zfszb"] Jan 27 18:23:32 crc kubenswrapper[4907]: W0127 18:23:32.496065 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ea123ce_4328_4379_8310_dbfff15acfbf.slice/crio-9851631cf8b40411b75bd18c012118b14bc8c3f286208c60cb6605a7467f3108 WatchSource:0}: Error finding container 9851631cf8b40411b75bd18c012118b14bc8c3f286208c60cb6605a7467f3108: Status 404 returned error can't find the container with id 9851631cf8b40411b75bd18c012118b14bc8c3f286208c60cb6605a7467f3108 Jan 27 18:23:32 crc kubenswrapper[4907]: I0127 18:23:32.772991 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" event={"ID":"dd967d05-2ecd-4578-9c41-22e36ff088c1","Type":"ContainerStarted","Data":"77e378198fdb5d60bcf337a9a0f6ebad022f22c16fef0f7c9a8be0b52d275a12"} Jan 27 18:23:32 crc kubenswrapper[4907]: I0127 18:23:32.774000 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerStarted","Data":"362f1edb8a8b542e8b216720056c5f4b701ab36658be615fb2717a8e56ff9554"} Jan 27 18:23:32 crc kubenswrapper[4907]: I0127 18:23:32.776130 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-zfszb" 
event={"ID":"2ea123ce-4328-4379-8310-dbfff15acfbf","Type":"ContainerStarted","Data":"40cd7a5d80d218271bea8717a319835353921836aafb58ed3d0ec0874ec2a345"} Jan 27 18:23:32 crc kubenswrapper[4907]: I0127 18:23:32.776182 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-zfszb" event={"ID":"2ea123ce-4328-4379-8310-dbfff15acfbf","Type":"ContainerStarted","Data":"9851631cf8b40411b75bd18c012118b14bc8c3f286208c60cb6605a7467f3108"} Jan 27 18:23:33 crc kubenswrapper[4907]: I0127 18:23:33.464720 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-memberlist\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv" Jan 27 18:23:33 crc kubenswrapper[4907]: I0127 18:23:33.479053 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/aa958bdc-32c5-4e9f-841e-7427fdb87b31-memberlist\") pod \"speaker-597cv\" (UID: \"aa958bdc-32c5-4e9f-841e-7427fdb87b31\") " pod="metallb-system/speaker-597cv" Jan 27 18:23:33 crc kubenswrapper[4907]: I0127 18:23:33.541415 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-597cv" Jan 27 18:23:33 crc kubenswrapper[4907]: I0127 18:23:33.809453 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-zfszb" event={"ID":"2ea123ce-4328-4379-8310-dbfff15acfbf","Type":"ContainerStarted","Data":"b104306e13369ae74dcb61d58fc4b9e245d2a21e0102447b84cbdab27c73428e"} Jan 27 18:23:33 crc kubenswrapper[4907]: I0127 18:23:33.810646 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-zfszb" Jan 27 18:23:33 crc kubenswrapper[4907]: I0127 18:23:33.812987 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-597cv" event={"ID":"aa958bdc-32c5-4e9f-841e-7427fdb87b31","Type":"ContainerStarted","Data":"41ab4c3ed7c09c2faaf72bc84c3c6abd7cb0fb785e2992c69cf2391343255b42"} Jan 27 18:23:33 crc kubenswrapper[4907]: I0127 18:23:33.837239 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-zfszb" podStartSLOduration=2.837222006 podStartE2EDuration="2.837222006s" podCreationTimestamp="2026-01-27 18:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:23:33.834418906 +0000 UTC m=+1068.963701588" watchObservedRunningTime="2026-01-27 18:23:33.837222006 +0000 UTC m=+1068.966504618" Jan 27 18:23:34 crc kubenswrapper[4907]: I0127 18:23:34.832341 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-597cv" event={"ID":"aa958bdc-32c5-4e9f-841e-7427fdb87b31","Type":"ContainerStarted","Data":"5a0b72a1110ec8da615bc2ef5523a765fded0866c290167b1cc95c9f32799cfb"} Jan 27 18:23:34 crc kubenswrapper[4907]: I0127 18:23:34.832745 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-597cv" 
event={"ID":"aa958bdc-32c5-4e9f-841e-7427fdb87b31","Type":"ContainerStarted","Data":"75e7123b6c18492aa2931147d03564faca2febccbc3878b7adf849854dd0e818"} Jan 27 18:23:34 crc kubenswrapper[4907]: I0127 18:23:34.832772 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-597cv" Jan 27 18:23:34 crc kubenswrapper[4907]: I0127 18:23:34.857401 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-597cv" podStartSLOduration=3.857361873 podStartE2EDuration="3.857361873s" podCreationTimestamp="2026-01-27 18:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:23:34.850844426 +0000 UTC m=+1069.980127038" watchObservedRunningTime="2026-01-27 18:23:34.857361873 +0000 UTC m=+1069.986644485" Jan 27 18:23:40 crc kubenswrapper[4907]: I0127 18:23:40.897734 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" event={"ID":"dd967d05-2ecd-4578-9c41-22e36ff088c1","Type":"ContainerStarted","Data":"4731f805fb1c5df1bac62dacf64b38b7d6a53c73263c1e42e7b0f4105bbfff5d"} Jan 27 18:23:40 crc kubenswrapper[4907]: I0127 18:23:40.898239 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" Jan 27 18:23:40 crc kubenswrapper[4907]: I0127 18:23:40.899495 4907 generic.go:334] "Generic (PLEG): container finished" podID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerID="ed6bf069421f22bffc1737f2704c0195e297d0c41ec17739d12366b076e8edee" exitCode=0 Jan 27 18:23:40 crc kubenswrapper[4907]: I0127 18:23:40.899519 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerDied","Data":"ed6bf069421f22bffc1737f2704c0195e297d0c41ec17739d12366b076e8edee"} Jan 27 18:23:40 crc 
kubenswrapper[4907]: I0127 18:23:40.921852 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" podStartSLOduration=2.382870139 podStartE2EDuration="9.921836119s" podCreationTimestamp="2026-01-27 18:23:31 +0000 UTC" firstStartedPulling="2026-01-27 18:23:32.441506701 +0000 UTC m=+1067.570789313" lastFinishedPulling="2026-01-27 18:23:39.980472681 +0000 UTC m=+1075.109755293" observedRunningTime="2026-01-27 18:23:40.920184311 +0000 UTC m=+1076.049466973" watchObservedRunningTime="2026-01-27 18:23:40.921836119 +0000 UTC m=+1076.051118731" Jan 27 18:23:41 crc kubenswrapper[4907]: I0127 18:23:41.919850 4907 generic.go:334] "Generic (PLEG): container finished" podID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerID="3f6ebcb60e75c81fab0a67c08ec59f5b4352844736add1383a32c0735642654f" exitCode=0 Jan 27 18:23:41 crc kubenswrapper[4907]: I0127 18:23:41.920748 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerDied","Data":"3f6ebcb60e75c81fab0a67c08ec59f5b4352844736add1383a32c0735642654f"} Jan 27 18:23:42 crc kubenswrapper[4907]: I0127 18:23:42.059612 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-zfszb" Jan 27 18:23:42 crc kubenswrapper[4907]: I0127 18:23:42.932071 4907 generic.go:334] "Generic (PLEG): container finished" podID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerID="be05aa27079965e70410c1b2999f5d9b87c2d92d6a2c9bfec658c5af1d68ffee" exitCode=0 Jan 27 18:23:42 crc kubenswrapper[4907]: I0127 18:23:42.932125 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerDied","Data":"be05aa27079965e70410c1b2999f5d9b87c2d92d6a2c9bfec658c5af1d68ffee"} Jan 27 18:23:43 crc kubenswrapper[4907]: I0127 18:23:43.545989 4907 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-597cv" Jan 27 18:23:43 crc kubenswrapper[4907]: I0127 18:23:43.958259 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerStarted","Data":"0dd0c024d285007081e7505fc159f97547e1723ffb1c8c7f43a625cf76b85def"} Jan 27 18:23:43 crc kubenswrapper[4907]: I0127 18:23:43.958307 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerStarted","Data":"b02b3f8eaa54b412861cb429e240b24487a525470003342b463ac187f4ff4975"} Jan 27 18:23:43 crc kubenswrapper[4907]: I0127 18:23:43.958322 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerStarted","Data":"82d338fa66a1b5f04505531c8c91400b8ffb7774a88ad8e48888fae516c073ae"} Jan 27 18:23:43 crc kubenswrapper[4907]: I0127 18:23:43.958334 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerStarted","Data":"1262f141b48da7795e7b6536b0148eb0b29160c91ad229eb5208ab0c76214872"} Jan 27 18:23:43 crc kubenswrapper[4907]: I0127 18:23:43.958345 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerStarted","Data":"6b21e7a354a1cf3b780d112ae0f5ebea2fdedd8a627d0ec104d29ce320c5bbbf"} Jan 27 18:23:44 crc kubenswrapper[4907]: I0127 18:23:44.976172 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerStarted","Data":"0c8ab571a6f468a591b9f8d3070624c93958577ecf6dbabfe9900f1ee9680097"} Jan 27 18:23:44 crc kubenswrapper[4907]: I0127 18:23:44.976395 
4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:45 crc kubenswrapper[4907]: I0127 18:23:45.018109 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-csdnr" podStartSLOduration=6.178597889 podStartE2EDuration="14.018079027s" podCreationTimestamp="2026-01-27 18:23:31 +0000 UTC" firstStartedPulling="2026-01-27 18:23:32.110122264 +0000 UTC m=+1067.239404876" lastFinishedPulling="2026-01-27 18:23:39.949603402 +0000 UTC m=+1075.078886014" observedRunningTime="2026-01-27 18:23:45.007701468 +0000 UTC m=+1080.136984120" watchObservedRunningTime="2026-01-27 18:23:45.018079027 +0000 UTC m=+1080.147361679" Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.621888 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-sft9m"] Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.623777 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-sft9m" Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.628088 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.628118 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.628147 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-ds6m6" Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.660689 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sft9m"] Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.694491 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dgfk\" (UniqueName: \"kubernetes.io/projected/869bd25c-e49d-4825-9020-af568185847c-kube-api-access-2dgfk\") pod \"openstack-operator-index-sft9m\" (UID: \"869bd25c-e49d-4825-9020-af568185847c\") " pod="openstack-operators/openstack-operator-index-sft9m" Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.795801 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dgfk\" (UniqueName: \"kubernetes.io/projected/869bd25c-e49d-4825-9020-af568185847c-kube-api-access-2dgfk\") pod \"openstack-operator-index-sft9m\" (UID: \"869bd25c-e49d-4825-9020-af568185847c\") " pod="openstack-operators/openstack-operator-index-sft9m" Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.815208 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dgfk\" (UniqueName: \"kubernetes.io/projected/869bd25c-e49d-4825-9020-af568185847c-kube-api-access-2dgfk\") pod \"openstack-operator-index-sft9m\" (UID: 
\"869bd25c-e49d-4825-9020-af568185847c\") " pod="openstack-operators/openstack-operator-index-sft9m" Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.943224 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sft9m" Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.946071 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:46 crc kubenswrapper[4907]: I0127 18:23:46.986047 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-csdnr" Jan 27 18:23:47 crc kubenswrapper[4907]: I0127 18:23:47.414397 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sft9m"] Jan 27 18:23:47 crc kubenswrapper[4907]: W0127 18:23:47.429907 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod869bd25c_e49d_4825_9020_af568185847c.slice/crio-1d82158e4127f026416936331ce8371f5ede3e048a89f638698fac1b487076bb WatchSource:0}: Error finding container 1d82158e4127f026416936331ce8371f5ede3e048a89f638698fac1b487076bb: Status 404 returned error can't find the container with id 1d82158e4127f026416936331ce8371f5ede3e048a89f638698fac1b487076bb Jan 27 18:23:48 crc kubenswrapper[4907]: I0127 18:23:48.008516 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sft9m" event={"ID":"869bd25c-e49d-4825-9020-af568185847c","Type":"ContainerStarted","Data":"1d82158e4127f026416936331ce8371f5ede3e048a89f638698fac1b487076bb"} Jan 27 18:23:49 crc kubenswrapper[4907]: I0127 18:23:49.988718 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-sft9m"] Jan 27 18:23:50 crc kubenswrapper[4907]: I0127 18:23:50.025310 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-index-sft9m" event={"ID":"869bd25c-e49d-4825-9020-af568185847c","Type":"ContainerStarted","Data":"5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc"} Jan 27 18:23:50 crc kubenswrapper[4907]: I0127 18:23:50.041898 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-sft9m" podStartSLOduration=1.745560766 podStartE2EDuration="4.041881115s" podCreationTimestamp="2026-01-27 18:23:46 +0000 UTC" firstStartedPulling="2026-01-27 18:23:47.432509638 +0000 UTC m=+1082.561792240" lastFinishedPulling="2026-01-27 18:23:49.728829977 +0000 UTC m=+1084.858112589" observedRunningTime="2026-01-27 18:23:50.041217756 +0000 UTC m=+1085.170500408" watchObservedRunningTime="2026-01-27 18:23:50.041881115 +0000 UTC m=+1085.171163737" Jan 27 18:23:50 crc kubenswrapper[4907]: I0127 18:23:50.596699 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xc2fp"] Jan 27 18:23:50 crc kubenswrapper[4907]: I0127 18:23:50.597628 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xc2fp" Jan 27 18:23:50 crc kubenswrapper[4907]: I0127 18:23:50.625775 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xc2fp"] Jan 27 18:23:50 crc kubenswrapper[4907]: I0127 18:23:50.672330 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxbf9\" (UniqueName: \"kubernetes.io/projected/0a849662-db42-42f0-9317-eb3714b775d0-kube-api-access-mxbf9\") pod \"openstack-operator-index-xc2fp\" (UID: \"0a849662-db42-42f0-9317-eb3714b775d0\") " pod="openstack-operators/openstack-operator-index-xc2fp" Jan 27 18:23:50 crc kubenswrapper[4907]: I0127 18:23:50.775162 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxbf9\" (UniqueName: \"kubernetes.io/projected/0a849662-db42-42f0-9317-eb3714b775d0-kube-api-access-mxbf9\") pod \"openstack-operator-index-xc2fp\" (UID: \"0a849662-db42-42f0-9317-eb3714b775d0\") " pod="openstack-operators/openstack-operator-index-xc2fp" Jan 27 18:23:50 crc kubenswrapper[4907]: I0127 18:23:50.807492 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxbf9\" (UniqueName: \"kubernetes.io/projected/0a849662-db42-42f0-9317-eb3714b775d0-kube-api-access-mxbf9\") pod \"openstack-operator-index-xc2fp\" (UID: \"0a849662-db42-42f0-9317-eb3714b775d0\") " pod="openstack-operators/openstack-operator-index-xc2fp" Jan 27 18:23:50 crc kubenswrapper[4907]: I0127 18:23:50.930484 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xc2fp" Jan 27 18:23:51 crc kubenswrapper[4907]: I0127 18:23:51.036004 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-sft9m" podUID="869bd25c-e49d-4825-9020-af568185847c" containerName="registry-server" containerID="cri-o://5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc" gracePeriod=2 Jan 27 18:23:51 crc kubenswrapper[4907]: I0127 18:23:51.420463 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xc2fp"] Jan 27 18:23:51 crc kubenswrapper[4907]: I0127 18:23:51.441271 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sft9m" Jan 27 18:23:51 crc kubenswrapper[4907]: I0127 18:23:51.490747 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dgfk\" (UniqueName: \"kubernetes.io/projected/869bd25c-e49d-4825-9020-af568185847c-kube-api-access-2dgfk\") pod \"869bd25c-e49d-4825-9020-af568185847c\" (UID: \"869bd25c-e49d-4825-9020-af568185847c\") " Jan 27 18:23:51 crc kubenswrapper[4907]: I0127 18:23:51.500356 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869bd25c-e49d-4825-9020-af568185847c-kube-api-access-2dgfk" (OuterVolumeSpecName: "kube-api-access-2dgfk") pod "869bd25c-e49d-4825-9020-af568185847c" (UID: "869bd25c-e49d-4825-9020-af568185847c"). InnerVolumeSpecName "kube-api-access-2dgfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:23:51 crc kubenswrapper[4907]: I0127 18:23:51.593539 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dgfk\" (UniqueName: \"kubernetes.io/projected/869bd25c-e49d-4825-9020-af568185847c-kube-api-access-2dgfk\") on node \"crc\" DevicePath \"\"" Jan 27 18:23:51 crc kubenswrapper[4907]: I0127 18:23:51.962414 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.045880 4907 generic.go:334] "Generic (PLEG): container finished" podID="869bd25c-e49d-4825-9020-af568185847c" containerID="5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc" exitCode=0 Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.045940 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sft9m" Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.045979 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sft9m" event={"ID":"869bd25c-e49d-4825-9020-af568185847c","Type":"ContainerDied","Data":"5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc"} Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.046013 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sft9m" event={"ID":"869bd25c-e49d-4825-9020-af568185847c","Type":"ContainerDied","Data":"1d82158e4127f026416936331ce8371f5ede3e048a89f638698fac1b487076bb"} Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.046034 4907 scope.go:117] "RemoveContainer" containerID="5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc" Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.048115 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xc2fp" 
event={"ID":"0a849662-db42-42f0-9317-eb3714b775d0","Type":"ContainerStarted","Data":"7a3ddecc4666a1edd6e6be0a48822a7458971e5e1454d6da56d6ff456c68ae08"} Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.048158 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xc2fp" event={"ID":"0a849662-db42-42f0-9317-eb3714b775d0","Type":"ContainerStarted","Data":"abfac714ead85d81126d896f8c3f75fd9b29184b3cec584eb875cc9abd336e78"} Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.068786 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-sft9m"] Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.069302 4907 scope.go:117] "RemoveContainer" containerID="5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc" Jan 27 18:23:52 crc kubenswrapper[4907]: E0127 18:23:52.069793 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc\": container with ID starting with 5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc not found: ID does not exist" containerID="5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc" Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.069847 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc"} err="failed to get container status \"5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc\": rpc error: code = NotFound desc = could not find container \"5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc\": container with ID starting with 5e3033cb4962e92259798ec8c764d075853d5e895cfb4260afaad0ea03ca33dc not found: ID does not exist" Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.076129 4907 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack-operators/openstack-operator-index-sft9m"] Jan 27 18:23:52 crc kubenswrapper[4907]: I0127 18:23:52.080529 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xc2fp" podStartSLOduration=2.015865799 podStartE2EDuration="2.080510101s" podCreationTimestamp="2026-01-27 18:23:50 +0000 UTC" firstStartedPulling="2026-01-27 18:23:51.431124914 +0000 UTC m=+1086.560407526" lastFinishedPulling="2026-01-27 18:23:51.495769206 +0000 UTC m=+1086.625051828" observedRunningTime="2026-01-27 18:23:52.076104574 +0000 UTC m=+1087.205387186" watchObservedRunningTime="2026-01-27 18:23:52.080510101 +0000 UTC m=+1087.209792713" Jan 27 18:23:53 crc kubenswrapper[4907]: I0127 18:23:53.764010 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869bd25c-e49d-4825-9020-af568185847c" path="/var/lib/kubelet/pods/869bd25c-e49d-4825-9020-af568185847c/volumes" Jan 27 18:23:56 crc kubenswrapper[4907]: I0127 18:23:56.521393 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:23:56 crc kubenswrapper[4907]: I0127 18:23:56.521813 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:24:00 crc kubenswrapper[4907]: I0127 18:24:00.931801 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-xc2fp" Jan 27 18:24:00 crc kubenswrapper[4907]: I0127 18:24:00.932356 4907 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-xc2fp" Jan 27 18:24:00 crc kubenswrapper[4907]: I0127 18:24:00.966261 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-xc2fp" Jan 27 18:24:01 crc kubenswrapper[4907]: I0127 18:24:01.185278 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-xc2fp" Jan 27 18:24:01 crc kubenswrapper[4907]: I0127 18:24:01.949364 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-csdnr" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.476057 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd"] Jan 27 18:24:03 crc kubenswrapper[4907]: E0127 18:24:03.477840 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869bd25c-e49d-4825-9020-af568185847c" containerName="registry-server" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.477855 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="869bd25c-e49d-4825-9020-af568185847c" containerName="registry-server" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.478175 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="869bd25c-e49d-4825-9020-af568185847c" containerName="registry-server" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.482238 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.486918 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd"] Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.491452 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hksc6" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.622069 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-bundle\") pod \"684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.622146 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-util\") pod \"684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.622250 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjxcc\" (UniqueName: \"kubernetes.io/projected/31e27b41-8fcc-441c-a1cd-0cfedddea164-kube-api-access-sjxcc\") pod \"684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 
18:24:03.724676 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-util\") pod \"684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.724792 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjxcc\" (UniqueName: \"kubernetes.io/projected/31e27b41-8fcc-441c-a1cd-0cfedddea164-kube-api-access-sjxcc\") pod \"684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.725057 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-bundle\") pod \"684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.726022 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-bundle\") pod \"684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.726218 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-util\") pod \"684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.763291 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjxcc\" (UniqueName: \"kubernetes.io/projected/31e27b41-8fcc-441c-a1cd-0cfedddea164-kube-api-access-sjxcc\") pod \"684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:03 crc kubenswrapper[4907]: I0127 18:24:03.818389 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:04 crc kubenswrapper[4907]: I0127 18:24:04.335209 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd"] Jan 27 18:24:04 crc kubenswrapper[4907]: W0127 18:24:04.339629 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31e27b41_8fcc_441c_a1cd_0cfedddea164.slice/crio-8151d504dba95b694ecadc74839c0f83d79ed84da238655a28ffbd67196cc3e7 WatchSource:0}: Error finding container 8151d504dba95b694ecadc74839c0f83d79ed84da238655a28ffbd67196cc3e7: Status 404 returned error can't find the container with id 8151d504dba95b694ecadc74839c0f83d79ed84da238655a28ffbd67196cc3e7 Jan 27 18:24:05 crc kubenswrapper[4907]: I0127 18:24:05.176416 4907 generic.go:334] "Generic (PLEG): container finished" podID="31e27b41-8fcc-441c-a1cd-0cfedddea164" containerID="cac48695ec0c97af9ca5a93963057747a0b5161e5d0e7a0ba604b4976826d315" exitCode=0 Jan 27 
18:24:05 crc kubenswrapper[4907]: I0127 18:24:05.176545 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" event={"ID":"31e27b41-8fcc-441c-a1cd-0cfedddea164","Type":"ContainerDied","Data":"cac48695ec0c97af9ca5a93963057747a0b5161e5d0e7a0ba604b4976826d315"} Jan 27 18:24:05 crc kubenswrapper[4907]: I0127 18:24:05.176935 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" event={"ID":"31e27b41-8fcc-441c-a1cd-0cfedddea164","Type":"ContainerStarted","Data":"8151d504dba95b694ecadc74839c0f83d79ed84da238655a28ffbd67196cc3e7"} Jan 27 18:24:06 crc kubenswrapper[4907]: I0127 18:24:06.190492 4907 generic.go:334] "Generic (PLEG): container finished" podID="31e27b41-8fcc-441c-a1cd-0cfedddea164" containerID="c3c0a8d8a060e5f546963619abcca744668071adda6be559b5e994cea6ef285a" exitCode=0 Jan 27 18:24:06 crc kubenswrapper[4907]: I0127 18:24:06.190544 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" event={"ID":"31e27b41-8fcc-441c-a1cd-0cfedddea164","Type":"ContainerDied","Data":"c3c0a8d8a060e5f546963619abcca744668071adda6be559b5e994cea6ef285a"} Jan 27 18:24:07 crc kubenswrapper[4907]: I0127 18:24:07.206289 4907 generic.go:334] "Generic (PLEG): container finished" podID="31e27b41-8fcc-441c-a1cd-0cfedddea164" containerID="ba21fe2dffcf5dfcdf72dc7ae8756238536aea063b6110845e724fb7077afa64" exitCode=0 Jan 27 18:24:07 crc kubenswrapper[4907]: I0127 18:24:07.206355 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" event={"ID":"31e27b41-8fcc-441c-a1cd-0cfedddea164","Type":"ContainerDied","Data":"ba21fe2dffcf5dfcdf72dc7ae8756238536aea063b6110845e724fb7077afa64"} Jan 27 18:24:08 crc kubenswrapper[4907]: I0127 18:24:08.666185 
4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:08 crc kubenswrapper[4907]: I0127 18:24:08.738038 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-bundle\") pod \"31e27b41-8fcc-441c-a1cd-0cfedddea164\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " Jan 27 18:24:08 crc kubenswrapper[4907]: I0127 18:24:08.738091 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-util\") pod \"31e27b41-8fcc-441c-a1cd-0cfedddea164\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " Jan 27 18:24:08 crc kubenswrapper[4907]: I0127 18:24:08.738273 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjxcc\" (UniqueName: \"kubernetes.io/projected/31e27b41-8fcc-441c-a1cd-0cfedddea164-kube-api-access-sjxcc\") pod \"31e27b41-8fcc-441c-a1cd-0cfedddea164\" (UID: \"31e27b41-8fcc-441c-a1cd-0cfedddea164\") " Jan 27 18:24:08 crc kubenswrapper[4907]: I0127 18:24:08.739203 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-bundle" (OuterVolumeSpecName: "bundle") pod "31e27b41-8fcc-441c-a1cd-0cfedddea164" (UID: "31e27b41-8fcc-441c-a1cd-0cfedddea164"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:24:08 crc kubenswrapper[4907]: I0127 18:24:08.748904 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31e27b41-8fcc-441c-a1cd-0cfedddea164-kube-api-access-sjxcc" (OuterVolumeSpecName: "kube-api-access-sjxcc") pod "31e27b41-8fcc-441c-a1cd-0cfedddea164" (UID: "31e27b41-8fcc-441c-a1cd-0cfedddea164"). 
InnerVolumeSpecName "kube-api-access-sjxcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:24:08 crc kubenswrapper[4907]: I0127 18:24:08.758750 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-util" (OuterVolumeSpecName: "util") pod "31e27b41-8fcc-441c-a1cd-0cfedddea164" (UID: "31e27b41-8fcc-441c-a1cd-0cfedddea164"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:24:08 crc kubenswrapper[4907]: I0127 18:24:08.840379 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjxcc\" (UniqueName: \"kubernetes.io/projected/31e27b41-8fcc-441c-a1cd-0cfedddea164-kube-api-access-sjxcc\") on node \"crc\" DevicePath \"\"" Jan 27 18:24:08 crc kubenswrapper[4907]: I0127 18:24:08.840443 4907 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:24:08 crc kubenswrapper[4907]: I0127 18:24:08.840464 4907 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31e27b41-8fcc-441c-a1cd-0cfedddea164-util\") on node \"crc\" DevicePath \"\"" Jan 27 18:24:09 crc kubenswrapper[4907]: I0127 18:24:09.238066 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" event={"ID":"31e27b41-8fcc-441c-a1cd-0cfedddea164","Type":"ContainerDied","Data":"8151d504dba95b694ecadc74839c0f83d79ed84da238655a28ffbd67196cc3e7"} Jan 27 18:24:09 crc kubenswrapper[4907]: I0127 18:24:09.238518 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8151d504dba95b694ecadc74839c0f83d79ed84da238655a28ffbd67196cc3e7" Jan 27 18:24:09 crc kubenswrapper[4907]: I0127 18:24:09.238152 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd" Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.478921 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc"] Jan 27 18:24:15 crc kubenswrapper[4907]: E0127 18:24:15.479943 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e27b41-8fcc-441c-a1cd-0cfedddea164" containerName="util" Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.479960 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e27b41-8fcc-441c-a1cd-0cfedddea164" containerName="util" Jan 27 18:24:15 crc kubenswrapper[4907]: E0127 18:24:15.479984 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e27b41-8fcc-441c-a1cd-0cfedddea164" containerName="extract" Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.479990 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e27b41-8fcc-441c-a1cd-0cfedddea164" containerName="extract" Jan 27 18:24:15 crc kubenswrapper[4907]: E0127 18:24:15.480007 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e27b41-8fcc-441c-a1cd-0cfedddea164" containerName="pull" Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.480014 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e27b41-8fcc-441c-a1cd-0cfedddea164" containerName="pull" Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.480170 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e27b41-8fcc-441c-a1cd-0cfedddea164" containerName="extract" Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.480767 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.483189 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-sf6qj" Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.506601 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc"] Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.567072 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg56h\" (UniqueName: \"kubernetes.io/projected/f22de95d-f437-432c-917a-a08c082e02c4-kube-api-access-zg56h\") pod \"openstack-operator-controller-init-7c754559d6-wt8dc\" (UID: \"f22de95d-f437-432c-917a-a08c082e02c4\") " pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.668147 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg56h\" (UniqueName: \"kubernetes.io/projected/f22de95d-f437-432c-917a-a08c082e02c4-kube-api-access-zg56h\") pod \"openstack-operator-controller-init-7c754559d6-wt8dc\" (UID: \"f22de95d-f437-432c-917a-a08c082e02c4\") " pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.688101 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg56h\" (UniqueName: \"kubernetes.io/projected/f22de95d-f437-432c-917a-a08c082e02c4-kube-api-access-zg56h\") pod \"openstack-operator-controller-init-7c754559d6-wt8dc\" (UID: \"f22de95d-f437-432c-917a-a08c082e02c4\") " pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" Jan 27 18:24:15 crc kubenswrapper[4907]: I0127 18:24:15.800400 4907 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" Jan 27 18:24:16 crc kubenswrapper[4907]: I0127 18:24:16.254781 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc"] Jan 27 18:24:16 crc kubenswrapper[4907]: W0127 18:24:16.264982 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf22de95d_f437_432c_917a_a08c082e02c4.slice/crio-c59e942d8efcff35b199aa0a3a1659f2a152151e19923ae79f98429cbee1524b WatchSource:0}: Error finding container c59e942d8efcff35b199aa0a3a1659f2a152151e19923ae79f98429cbee1524b: Status 404 returned error can't find the container with id c59e942d8efcff35b199aa0a3a1659f2a152151e19923ae79f98429cbee1524b Jan 27 18:24:16 crc kubenswrapper[4907]: I0127 18:24:16.298030 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" event={"ID":"f22de95d-f437-432c-917a-a08c082e02c4","Type":"ContainerStarted","Data":"c59e942d8efcff35b199aa0a3a1659f2a152151e19923ae79f98429cbee1524b"} Jan 27 18:24:21 crc kubenswrapper[4907]: I0127 18:24:21.343308 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" event={"ID":"f22de95d-f437-432c-917a-a08c082e02c4","Type":"ContainerStarted","Data":"629e463a589c9cd19a0c4f9024b2b0a5c378af295a1f0de861335384cb35ab06"} Jan 27 18:24:21 crc kubenswrapper[4907]: I0127 18:24:21.344157 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" Jan 27 18:24:21 crc kubenswrapper[4907]: I0127 18:24:21.402850 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" podStartSLOduration=2.223204388 
podStartE2EDuration="6.402828078s" podCreationTimestamp="2026-01-27 18:24:15 +0000 UTC" firstStartedPulling="2026-01-27 18:24:16.267230409 +0000 UTC m=+1111.396513021" lastFinishedPulling="2026-01-27 18:24:20.446854099 +0000 UTC m=+1115.576136711" observedRunningTime="2026-01-27 18:24:21.395220149 +0000 UTC m=+1116.524502841" watchObservedRunningTime="2026-01-27 18:24:21.402828078 +0000 UTC m=+1116.532110720" Jan 27 18:24:25 crc kubenswrapper[4907]: I0127 18:24:25.804065 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" Jan 27 18:24:26 crc kubenswrapper[4907]: I0127 18:24:26.521843 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:24:26 crc kubenswrapper[4907]: I0127 18:24:26.522152 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.351171 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.352447 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.358341 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-whv2v" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.364501 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.418118 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.419183 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.421191 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-2t89z" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.433703 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.434855 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.438546 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-twd54" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.441436 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.449068 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.449694 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxxpl\" (UniqueName: \"kubernetes.io/projected/e6378a4c-96e5-4151-a0ca-c320fa9b667d-kube-api-access-lxxpl\") pod \"barbican-operator-controller-manager-7f86f8796f-8jsvt\" (UID: \"e6378a4c-96e5-4151-a0ca-c320fa9b667d\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.458843 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.459863 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.467069 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-5wmp4" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.467372 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.478699 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.491017 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.495674 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-qs7qz" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.510591 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.523024 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.531905 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-h96k6" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.541250 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.551448 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2xqv\" (UniqueName: \"kubernetes.io/projected/a05cfe48-4bf5-4199-aefa-de59259798c4-kube-api-access-l2xqv\") pod \"glance-operator-controller-manager-78fdd796fd-7hgqc\" (UID: \"a05cfe48-4bf5-4199-aefa-de59259798c4\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.551532 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44ftf\" (UniqueName: \"kubernetes.io/projected/277579e8-58c3-4ad7-b902-e62f045ba8c6-kube-api-access-44ftf\") pod \"designate-operator-controller-manager-b45d7bf98-6lprh\" (UID: \"277579e8-58c3-4ad7-b902-e62f045ba8c6\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.551639 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdgpb\" (UniqueName: \"kubernetes.io/projected/e9f20d2f-16bf-49df-9c41-6fd6faa6ef67-kube-api-access-mdgpb\") pod \"heat-operator-controller-manager-594c8c9d5d-4nlx7\" (UID: \"e9f20d2f-16bf-49df-9c41-6fd6faa6ef67\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 
18:24:46.551672 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxxpl\" (UniqueName: \"kubernetes.io/projected/e6378a4c-96e5-4151-a0ca-c320fa9b667d-kube-api-access-lxxpl\") pod \"barbican-operator-controller-manager-7f86f8796f-8jsvt\" (UID: \"e6378a4c-96e5-4151-a0ca-c320fa9b667d\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.551702 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slszq\" (UniqueName: \"kubernetes.io/projected/018e0dfe-5282-40d5-87db-8551645d6e02-kube-api-access-slszq\") pod \"cinder-operator-controller-manager-7478f7dbf9-nznnn\" (UID: \"018e0dfe-5282-40d5-87db-8551645d6e02\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.564730 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.572972 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.574443 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.581314 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-p795z" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.590103 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.591349 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.596460 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.596993 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-w76rv" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.599573 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.653885 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert\") pod \"infra-operator-controller-manager-694cf4f878-mrpqf\" (UID: \"7c6ac148-bc7a-4480-9155-8f78567a5070\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.653981 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdgpb\" (UniqueName: 
\"kubernetes.io/projected/e9f20d2f-16bf-49df-9c41-6fd6faa6ef67-kube-api-access-mdgpb\") pod \"heat-operator-controller-manager-594c8c9d5d-4nlx7\" (UID: \"e9f20d2f-16bf-49df-9c41-6fd6faa6ef67\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.654005 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfgll\" (UniqueName: \"kubernetes.io/projected/7c6ac148-bc7a-4480-9155-8f78567a5070-kube-api-access-jfgll\") pod \"infra-operator-controller-manager-694cf4f878-mrpqf\" (UID: \"7c6ac148-bc7a-4480-9155-8f78567a5070\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.654035 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slszq\" (UniqueName: \"kubernetes.io/projected/018e0dfe-5282-40d5-87db-8551645d6e02-kube-api-access-slszq\") pod \"cinder-operator-controller-manager-7478f7dbf9-nznnn\" (UID: \"018e0dfe-5282-40d5-87db-8551645d6e02\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.654068 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klrnr\" (UniqueName: \"kubernetes.io/projected/c4a64f11-d6ef-487e-afa3-1d9bdbea9424-kube-api-access-klrnr\") pod \"ironic-operator-controller-manager-598f7747c9-hb2q7\" (UID: \"c4a64f11-d6ef-487e-afa3-1d9bdbea9424\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.654098 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr8dq\" (UniqueName: \"kubernetes.io/projected/f1ed42c6-98ac-41b8-96df-24919c0f9837-kube-api-access-xr8dq\") pod 
\"horizon-operator-controller-manager-77d5c5b54f-b29cj\" (UID: \"f1ed42c6-98ac-41b8-96df-24919c0f9837\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.654172 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2xqv\" (UniqueName: \"kubernetes.io/projected/a05cfe48-4bf5-4199-aefa-de59259798c4-kube-api-access-l2xqv\") pod \"glance-operator-controller-manager-78fdd796fd-7hgqc\" (UID: \"a05cfe48-4bf5-4199-aefa-de59259798c4\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.654212 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44ftf\" (UniqueName: \"kubernetes.io/projected/277579e8-58c3-4ad7-b902-e62f045ba8c6-kube-api-access-44ftf\") pod \"designate-operator-controller-manager-b45d7bf98-6lprh\" (UID: \"277579e8-58c3-4ad7-b902-e62f045ba8c6\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.666051 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxxpl\" (UniqueName: \"kubernetes.io/projected/e6378a4c-96e5-4151-a0ca-c320fa9b667d-kube-api-access-lxxpl\") pod \"barbican-operator-controller-manager-7f86f8796f-8jsvt\" (UID: \"e6378a4c-96e5-4151-a0ca-c320fa9b667d\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.678541 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.680187 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.690165 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2xqv\" (UniqueName: \"kubernetes.io/projected/a05cfe48-4bf5-4199-aefa-de59259798c4-kube-api-access-l2xqv\") pod \"glance-operator-controller-manager-78fdd796fd-7hgqc\" (UID: \"a05cfe48-4bf5-4199-aefa-de59259798c4\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.690471 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.690851 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-whmjg" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.695277 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.700228 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44ftf\" (UniqueName: \"kubernetes.io/projected/277579e8-58c3-4ad7-b902-e62f045ba8c6-kube-api-access-44ftf\") pod \"designate-operator-controller-manager-b45d7bf98-6lprh\" (UID: \"277579e8-58c3-4ad7-b902-e62f045ba8c6\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.700235 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slszq\" (UniqueName: \"kubernetes.io/projected/018e0dfe-5282-40d5-87db-8551645d6e02-kube-api-access-slszq\") pod \"cinder-operator-controller-manager-7478f7dbf9-nznnn\" (UID: 
\"018e0dfe-5282-40d5-87db-8551645d6e02\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.703067 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdgpb\" (UniqueName: \"kubernetes.io/projected/e9f20d2f-16bf-49df-9c41-6fd6faa6ef67-kube-api-access-mdgpb\") pod \"heat-operator-controller-manager-594c8c9d5d-4nlx7\" (UID: \"e9f20d2f-16bf-49df-9c41-6fd6faa6ef67\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.740103 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.741564 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.744134 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-8dgzx" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.755999 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.757524 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klrnr\" (UniqueName: \"kubernetes.io/projected/c4a64f11-d6ef-487e-afa3-1d9bdbea9424-kube-api-access-klrnr\") pod \"ironic-operator-controller-manager-598f7747c9-hb2q7\" (UID: \"c4a64f11-d6ef-487e-afa3-1d9bdbea9424\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.757646 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8dq\" (UniqueName: \"kubernetes.io/projected/f1ed42c6-98ac-41b8-96df-24919c0f9837-kube-api-access-xr8dq\") pod \"horizon-operator-controller-manager-77d5c5b54f-b29cj\" (UID: \"f1ed42c6-98ac-41b8-96df-24919c0f9837\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.757793 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert\") pod \"infra-operator-controller-manager-694cf4f878-mrpqf\" (UID: \"7c6ac148-bc7a-4480-9155-8f78567a5070\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.757915 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l2xt\" (UniqueName: \"kubernetes.io/projected/e257f81e-9460-4391-a7a5-cca3fc9230d9-kube-api-access-8l2xt\") pod \"keystone-operator-controller-manager-b8b6d4659-kjhgn\" (UID: \"e257f81e-9460-4391-a7a5-cca3fc9230d9\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.758006 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfgll\" (UniqueName: \"kubernetes.io/projected/7c6ac148-bc7a-4480-9155-8f78567a5070-kube-api-access-jfgll\") pod \"infra-operator-controller-manager-694cf4f878-mrpqf\" (UID: \"7c6ac148-bc7a-4480-9155-8f78567a5070\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:24:46 crc kubenswrapper[4907]: E0127 18:24:46.758642 4907 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 18:24:46 crc kubenswrapper[4907]: E0127 18:24:46.758710 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert podName:7c6ac148-bc7a-4480-9155-8f78567a5070 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:47.258692446 +0000 UTC m=+1142.387975058 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert") pod "infra-operator-controller-manager-694cf4f878-mrpqf" (UID: "7c6ac148-bc7a-4480-9155-8f78567a5070") : secret "infra-operator-webhook-server-cert" not found Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.765389 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.779297 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfgll\" (UniqueName: \"kubernetes.io/projected/7c6ac148-bc7a-4480-9155-8f78567a5070-kube-api-access-jfgll\") pod \"infra-operator-controller-manager-694cf4f878-mrpqf\" (UID: \"7c6ac148-bc7a-4480-9155-8f78567a5070\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.790310 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr8dq\" (UniqueName: \"kubernetes.io/projected/f1ed42c6-98ac-41b8-96df-24919c0f9837-kube-api-access-xr8dq\") pod \"horizon-operator-controller-manager-77d5c5b54f-b29cj\" (UID: \"f1ed42c6-98ac-41b8-96df-24919c0f9837\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.790795 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klrnr\" (UniqueName: \"kubernetes.io/projected/c4a64f11-d6ef-487e-afa3-1d9bdbea9424-kube-api-access-klrnr\") pod \"ironic-operator-controller-manager-598f7747c9-hb2q7\" (UID: \"c4a64f11-d6ef-487e-afa3-1d9bdbea9424\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.795879 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.797245 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.808638 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.832612 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.838737 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.848749 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.848838 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.853745 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.855784 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.853800 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-sgf77" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.859104 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdfk9\" (UniqueName: \"kubernetes.io/projected/bc6ebe7e-320a-4193-8db4-3d4574ba1c3b-kube-api-access-rdfk9\") pod \"manila-operator-controller-manager-78c6999f6f-mst5f\" (UID: \"bc6ebe7e-320a-4193-8db4-3d4574ba1c3b\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.859244 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l2xt\" (UniqueName: \"kubernetes.io/projected/e257f81e-9460-4391-a7a5-cca3fc9230d9-kube-api-access-8l2xt\") pod \"keystone-operator-controller-manager-b8b6d4659-kjhgn\" (UID: \"e257f81e-9460-4391-a7a5-cca3fc9230d9\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.860358 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-zxqc9" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.860850 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.864450 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.869380 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-8tbmw" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.870520 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.883111 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.883304 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.889881 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.891213 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l2xt\" (UniqueName: \"kubernetes.io/projected/e257f81e-9460-4391-a7a5-cca3fc9230d9-kube-api-access-8l2xt\") pod \"keystone-operator-controller-manager-b8b6d4659-kjhgn\" (UID: \"e257f81e-9460-4391-a7a5-cca3fc9230d9\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.893359 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.895699 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-b84cs" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.937418 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.949748 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.951930 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.954325 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.954537 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-7x8cp" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.959740 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.960703 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5nnn\" (UniqueName: \"kubernetes.io/projected/f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b-kube-api-access-v5nnn\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-9t69q\" (UID: \"f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b\") " 
pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.960853 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdfk9\" (UniqueName: \"kubernetes.io/projected/bc6ebe7e-320a-4193-8db4-3d4574ba1c3b-kube-api-access-rdfk9\") pod \"manila-operator-controller-manager-78c6999f6f-mst5f\" (UID: \"bc6ebe7e-320a-4193-8db4-3d4574ba1c3b\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.960900 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kszk7\" (UniqueName: \"kubernetes.io/projected/bd2d065d-dd6e-43bc-a725-e7fe52c024b1-kube-api-access-kszk7\") pod \"nova-operator-controller-manager-7bdb645866-fnh99\" (UID: \"bd2d065d-dd6e-43bc-a725-e7fe52c024b1\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.961010 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.962969 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-vd9bf" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.967667 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxrm9\" (UniqueName: \"kubernetes.io/projected/a733096f-e99d-4186-8542-1d8cb16012d2-kube-api-access-fxrm9\") pod \"octavia-operator-controller-manager-5f4cd88d46-tn4d6\" (UID: \"a733096f-e99d-4186-8542-1d8cb16012d2\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.967719 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhw5d\" (UniqueName: \"kubernetes.io/projected/774ac09a-4164-4e22-9ea2-385ac4ef87eb-kube-api-access-rhw5d\") pod \"neutron-operator-controller-manager-78d58447c5-l2pdl\" (UID: \"774ac09a-4164-4e22-9ea2-385ac4ef87eb\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.976337 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf"] Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.977380 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.980901 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-xj5jr" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.986542 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdfk9\" (UniqueName: \"kubernetes.io/projected/bc6ebe7e-320a-4193-8db4-3d4574ba1c3b-kube-api-access-rdfk9\") pod \"manila-operator-controller-manager-78c6999f6f-mst5f\" (UID: \"bc6ebe7e-320a-4193-8db4-3d4574ba1c3b\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" Jan 27 18:24:46 crc kubenswrapper[4907]: I0127 18:24:46.992216 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.000813 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l"] Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.010351 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9"] Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.018970 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf"] Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.036710 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt"] Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.037912 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.046320 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt"] Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.047853 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-g8xgp" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.055015 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm"] Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.056362 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.059351 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-7hms8" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.068893 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kszk7\" (UniqueName: \"kubernetes.io/projected/bd2d065d-dd6e-43bc-a725-e7fe52c024b1-kube-api-access-kszk7\") pod \"nova-operator-controller-manager-7bdb645866-fnh99\" (UID: \"bd2d065d-dd6e-43bc-a725-e7fe52c024b1\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.069154 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d58nt\" (UniqueName: \"kubernetes.io/projected/24caa967-ac26-4666-bf41-e2c4bc6ebb0f-kube-api-access-d58nt\") pod \"swift-operator-controller-manager-547cbdb99f-fljbt\" (UID: \"24caa967-ac26-4666-bf41-e2c4bc6ebb0f\") " 
pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.069267 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxrm9\" (UniqueName: \"kubernetes.io/projected/a733096f-e99d-4186-8542-1d8cb16012d2-kube-api-access-fxrm9\") pod \"octavia-operator-controller-manager-5f4cd88d46-tn4d6\" (UID: \"a733096f-e99d-4186-8542-1d8cb16012d2\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.069355 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhw5d\" (UniqueName: \"kubernetes.io/projected/774ac09a-4164-4e22-9ea2-385ac4ef87eb-kube-api-access-rhw5d\") pod \"neutron-operator-controller-manager-78d58447c5-l2pdl\" (UID: \"774ac09a-4164-4e22-9ea2-385ac4ef87eb\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.069442 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5nnn\" (UniqueName: \"kubernetes.io/projected/f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b-kube-api-access-v5nnn\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-9t69q\" (UID: \"f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.069517 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txjsn\" (UniqueName: \"kubernetes.io/projected/a8e8fa01-e75c-41bc-bfbb-affea0fcc0a2-kube-api-access-txjsn\") pod \"ovn-operator-controller-manager-6f75f45d54-bf27l\" (UID: \"a8e8fa01-e75c-41bc-bfbb-affea0fcc0a2\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.069618 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch4p7\" (UniqueName: \"kubernetes.io/projected/f84f4e53-c1de-49a3-8435-5e4999a034fd-kube-api-access-ch4p7\") pod \"placement-operator-controller-manager-79d5ccc684-mpgzf\" (UID: \"f84f4e53-c1de-49a3-8435-5e4999a034fd\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.069710 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zft97\" (UniqueName: \"kubernetes.io/projected/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-kube-api-access-zft97\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9\" (UID: \"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.069792 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9\" (UID: \"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.076882 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm"] Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.088499 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxrm9\" (UniqueName: \"kubernetes.io/projected/a733096f-e99d-4186-8542-1d8cb16012d2-kube-api-access-fxrm9\") pod \"octavia-operator-controller-manager-5f4cd88d46-tn4d6\" (UID: \"a733096f-e99d-4186-8542-1d8cb16012d2\") " 
pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.102715 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5nnn\" (UniqueName: \"kubernetes.io/projected/f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b-kube-api-access-v5nnn\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-9t69q\" (UID: \"f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.103595 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kszk7\" (UniqueName: \"kubernetes.io/projected/bd2d065d-dd6e-43bc-a725-e7fe52c024b1-kube-api-access-kszk7\") pod \"nova-operator-controller-manager-7bdb645866-fnh99\" (UID: \"bd2d065d-dd6e-43bc-a725-e7fe52c024b1\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.105977 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhw5d\" (UniqueName: \"kubernetes.io/projected/774ac09a-4164-4e22-9ea2-385ac4ef87eb-kube-api-access-rhw5d\") pod \"neutron-operator-controller-manager-78d58447c5-l2pdl\" (UID: \"774ac09a-4164-4e22-9ea2-385ac4ef87eb\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.121562 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw"] Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.122671 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.128015 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.133353 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-25mbl" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.137503 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw"] Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.163520 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.172072 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6sgs\" (UniqueName: \"kubernetes.io/projected/7f5a8eee-f06b-4376-90d6-ff3faef0e8af-kube-api-access-r6sgs\") pod \"test-operator-controller-manager-69797bbcbd-ph8fw\" (UID: \"7f5a8eee-f06b-4376-90d6-ff3faef0e8af\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.172344 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d58nt\" (UniqueName: \"kubernetes.io/projected/24caa967-ac26-4666-bf41-e2c4bc6ebb0f-kube-api-access-d58nt\") pod \"swift-operator-controller-manager-547cbdb99f-fljbt\" (UID: \"24caa967-ac26-4666-bf41-e2c4bc6ebb0f\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.172466 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txjsn\" (UniqueName: \"kubernetes.io/projected/a8e8fa01-e75c-41bc-bfbb-affea0fcc0a2-kube-api-access-txjsn\") pod \"ovn-operator-controller-manager-6f75f45d54-bf27l\" (UID: 
\"a8e8fa01-e75c-41bc-bfbb-affea0fcc0a2\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.172544 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch4p7\" (UniqueName: \"kubernetes.io/projected/f84f4e53-c1de-49a3-8435-5e4999a034fd-kube-api-access-ch4p7\") pod \"placement-operator-controller-manager-79d5ccc684-mpgzf\" (UID: \"f84f4e53-c1de-49a3-8435-5e4999a034fd\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.172751 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb4hm\" (UniqueName: \"kubernetes.io/projected/12b8e76f-853f-4eeb-b6c5-e77d05bec357-kube-api-access-lb4hm\") pod \"telemetry-operator-controller-manager-7567458d64-vvlhm\" (UID: \"12b8e76f-853f-4eeb-b6c5-e77d05bec357\") " pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.172833 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zft97\" (UniqueName: \"kubernetes.io/projected/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-kube-api-access-zft97\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9\" (UID: \"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.172905 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9\" (UID: \"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" 
Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.173072 4907 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.173254 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert podName:8a6e2a40-e233-4dbe-9b63-0fecf3fc1487 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:47.673237038 +0000 UTC m=+1142.802519650 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" (UID: "8a6e2a40-e233-4dbe-9b63-0fecf3fc1487") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.185609 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.195009 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch4p7\" (UniqueName: \"kubernetes.io/projected/f84f4e53-c1de-49a3-8435-5e4999a034fd-kube-api-access-ch4p7\") pod \"placement-operator-controller-manager-79d5ccc684-mpgzf\" (UID: \"f84f4e53-c1de-49a3-8435-5e4999a034fd\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.200036 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txjsn\" (UniqueName: \"kubernetes.io/projected/a8e8fa01-e75c-41bc-bfbb-affea0fcc0a2-kube-api-access-txjsn\") pod \"ovn-operator-controller-manager-6f75f45d54-bf27l\" (UID: \"a8e8fa01-e75c-41bc-bfbb-affea0fcc0a2\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.201486 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d58nt\" (UniqueName: \"kubernetes.io/projected/24caa967-ac26-4666-bf41-e2c4bc6ebb0f-kube-api-access-d58nt\") pod \"swift-operator-controller-manager-547cbdb99f-fljbt\" (UID: \"24caa967-ac26-4666-bf41-e2c4bc6ebb0f\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.203227 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zft97\" (UniqueName: \"kubernetes.io/projected/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-kube-api-access-zft97\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9\" (UID: \"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.204193 4907 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.237376 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.258034 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-wvnrt"] Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.261993 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.264482 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-888v4" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.276026 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-wvnrt"] Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.276492 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb4hm\" (UniqueName: \"kubernetes.io/projected/12b8e76f-853f-4eeb-b6c5-e77d05bec357-kube-api-access-lb4hm\") pod \"telemetry-operator-controller-manager-7567458d64-vvlhm\" (UID: \"12b8e76f-853f-4eeb-b6c5-e77d05bec357\") " pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.276601 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6sgs\" (UniqueName: \"kubernetes.io/projected/7f5a8eee-f06b-4376-90d6-ff3faef0e8af-kube-api-access-r6sgs\") pod \"test-operator-controller-manager-69797bbcbd-ph8fw\" (UID: 
\"7f5a8eee-f06b-4376-90d6-ff3faef0e8af\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.276638 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert\") pod \"infra-operator-controller-manager-694cf4f878-mrpqf\" (UID: \"7c6ac148-bc7a-4480-9155-8f78567a5070\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.277374 4907 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.277426 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert podName:7c6ac148-bc7a-4480-9155-8f78567a5070 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:48.277409399 +0000 UTC m=+1143.406692011 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert") pod "infra-operator-controller-manager-694cf4f878-mrpqf" (UID: "7c6ac148-bc7a-4480-9155-8f78567a5070") : secret "infra-operator-webhook-server-cert" not found Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.295477 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb4hm\" (UniqueName: \"kubernetes.io/projected/12b8e76f-853f-4eeb-b6c5-e77d05bec357-kube-api-access-lb4hm\") pod \"telemetry-operator-controller-manager-7567458d64-vvlhm\" (UID: \"12b8e76f-853f-4eeb-b6c5-e77d05bec357\") " pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.300474 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6sgs\" (UniqueName: \"kubernetes.io/projected/7f5a8eee-f06b-4376-90d6-ff3faef0e8af-kube-api-access-r6sgs\") pod \"test-operator-controller-manager-69797bbcbd-ph8fw\" (UID: \"7f5a8eee-f06b-4376-90d6-ff3faef0e8af\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.317252 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc"] Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.318673 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.319158 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.322036 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.322178 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xl8wk" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.335836 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.358038 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc"] Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.378422 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.378476 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.378873 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkpm9\" (UniqueName: 
\"kubernetes.io/projected/ba33cbc9-9a56-4c45-8c07-19b4110e03c3-kube-api-access-fkpm9\") pod \"watcher-operator-controller-manager-564965969-wvnrt\" (UID: \"ba33cbc9-9a56-4c45-8c07-19b4110e03c3\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.380295 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-985jh\" (UniqueName: \"kubernetes.io/projected/7707f450-bf8d-4e84-9baa-a02bc80a0b22-kube-api-access-985jh\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.378454 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97"] Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.389342 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97"] Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.389456 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.393623 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jdtrj" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.458340 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.477636 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt"] Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.481844 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkpm9\" (UniqueName: \"kubernetes.io/projected/ba33cbc9-9a56-4c45-8c07-19b4110e03c3-kube-api-access-fkpm9\") pod \"watcher-operator-controller-manager-564965969-wvnrt\" (UID: \"ba33cbc9-9a56-4c45-8c07-19b4110e03c3\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.481915 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-985jh\" (UniqueName: \"kubernetes.io/projected/7707f450-bf8d-4e84-9baa-a02bc80a0b22-kube-api-access-985jh\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.481944 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp9gk\" (UniqueName: \"kubernetes.io/projected/a4aa00b3-8a54-4f84-907d-34a73b93944f-kube-api-access-pp9gk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-gfl97\" (UID: \"a4aa00b3-8a54-4f84-907d-34a73b93944f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.481984 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs\") pod 
\"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.482008 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.482176 4907 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.482220 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs podName:7707f450-bf8d-4e84-9baa-a02bc80a0b22 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:47.982206768 +0000 UTC m=+1143.111489380 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs") pod "openstack-operator-controller-manager-6f954ddc5b-fjchc" (UID: "7707f450-bf8d-4e84-9baa-a02bc80a0b22") : secret "webhook-server-cert" not found Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.482827 4907 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.482918 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs podName:7707f450-bf8d-4e84-9baa-a02bc80a0b22 nodeName:}" failed. 
No retries permitted until 2026-01-27 18:24:47.982890648 +0000 UTC m=+1143.112173290 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs") pod "openstack-operator-controller-manager-6f954ddc5b-fjchc" (UID: "7707f450-bf8d-4e84-9baa-a02bc80a0b22") : secret "metrics-server-cert" not found Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.488206 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.502085 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.504411 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkpm9\" (UniqueName: \"kubernetes.io/projected/ba33cbc9-9a56-4c45-8c07-19b4110e03c3-kube-api-access-fkpm9\") pod \"watcher-operator-controller-manager-564965969-wvnrt\" (UID: \"ba33cbc9-9a56-4c45-8c07-19b4110e03c3\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.507128 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-985jh\" (UniqueName: \"kubernetes.io/projected/7707f450-bf8d-4e84-9baa-a02bc80a0b22-kube-api-access-985jh\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.514917 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.526926 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.583663 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp9gk\" (UniqueName: \"kubernetes.io/projected/a4aa00b3-8a54-4f84-907d-34a73b93944f-kube-api-access-pp9gk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-gfl97\" (UID: \"a4aa00b3-8a54-4f84-907d-34a73b93944f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.602942 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp9gk\" (UniqueName: \"kubernetes.io/projected/a4aa00b3-8a54-4f84-907d-34a73b93944f-kube-api-access-pp9gk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-gfl97\" (UID: \"a4aa00b3-8a54-4f84-907d-34a73b93944f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.606422 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.662168 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt" event={"ID":"e6378a4c-96e5-4151-a0ca-c320fa9b667d","Type":"ContainerStarted","Data":"2a913f8c674a159c3278174d993c98323f0c917de6665906c45c075a660e2217"} Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.685448 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9\" (UID: \"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.685929 4907 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.685995 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert podName:8a6e2a40-e233-4dbe-9b63-0fecf3fc1487 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:48.685976678 +0000 UTC m=+1143.815259290 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" (UID: "8a6e2a40-e233-4dbe-9b63-0fecf3fc1487") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.691546 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn"] Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.710919 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.988098 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc"] Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.991456 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.991503 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.992230 4907 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found 
Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.992231 4907 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.992303 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs podName:7707f450-bf8d-4e84-9baa-a02bc80a0b22 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:48.992280882 +0000 UTC m=+1144.121563564 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs") pod "openstack-operator-controller-manager-6f954ddc5b-fjchc" (UID: "7707f450-bf8d-4e84-9baa-a02bc80a0b22") : secret "webhook-server-cert" not found Jan 27 18:24:47 crc kubenswrapper[4907]: E0127 18:24:47.992349 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs podName:7707f450-bf8d-4e84-9baa-a02bc80a0b22 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:48.992322323 +0000 UTC m=+1144.121604995 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs") pod "openstack-operator-controller-manager-6f954ddc5b-fjchc" (UID: "7707f450-bf8d-4e84-9baa-a02bc80a0b22") : secret "metrics-server-cert" not found Jan 27 18:24:47 crc kubenswrapper[4907]: I0127 18:24:47.994877 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7"] Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.010184 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh"] Jan 27 18:24:48 crc kubenswrapper[4907]: W0127 18:24:48.014068 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod277579e8_58c3_4ad7_b902_e62f045ba8c6.slice/crio-a3ea6475685af28811e8940cf9ebf7ddde5139d3407bc5c350b12ad62f451133 WatchSource:0}: Error finding container a3ea6475685af28811e8940cf9ebf7ddde5139d3407bc5c350b12ad62f451133: Status 404 returned error can't find the container with id a3ea6475685af28811e8940cf9ebf7ddde5139d3407bc5c350b12ad62f451133 Jan 27 18:24:48 crc kubenswrapper[4907]: W0127 18:24:48.015902 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4a64f11_d6ef_487e_afa3_1d9bdbea9424.slice/crio-24fc48a96bf904cf4a29d042caf4de21f74aae56eacb7efbb8e5a82838646fab WatchSource:0}: Error finding container 24fc48a96bf904cf4a29d042caf4de21f74aae56eacb7efbb8e5a82838646fab: Status 404 returned error can't find the container with id 24fc48a96bf904cf4a29d042caf4de21f74aae56eacb7efbb8e5a82838646fab Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.016947 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7"] Jan 27 18:24:48 crc 
kubenswrapper[4907]: I0127 18:24:48.174055 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn"] Jan 27 18:24:48 crc kubenswrapper[4907]: W0127 18:24:48.180894 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode257f81e_9460_4391_a7a5_cca3fc9230d9.slice/crio-bfe815c60cd3dfe7bbad22ef6425cc7275e6e4f4c9c15be7aeeb245b383493e1 WatchSource:0}: Error finding container bfe815c60cd3dfe7bbad22ef6425cc7275e6e4f4c9c15be7aeeb245b383493e1: Status 404 returned error can't find the container with id bfe815c60cd3dfe7bbad22ef6425cc7275e6e4f4c9c15be7aeeb245b383493e1 Jan 27 18:24:48 crc kubenswrapper[4907]: W0127 18:24:48.225048 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc6ebe7e_320a_4193_8db4_3d4574ba1c3b.slice/crio-6454165408aa777c375a77ca4da862571acff80692873e12cf8c1eb5aaffd0e5 WatchSource:0}: Error finding container 6454165408aa777c375a77ca4da862571acff80692873e12cf8c1eb5aaffd0e5: Status 404 returned error can't find the container with id 6454165408aa777c375a77ca4da862571acff80692873e12cf8c1eb5aaffd0e5 Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.282337 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj"] Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.290712 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f"] Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.314365 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert\") pod \"infra-operator-controller-manager-694cf4f878-mrpqf\" (UID: \"7c6ac148-bc7a-4480-9155-8f78567a5070\") " 
pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:24:48 crc kubenswrapper[4907]: E0127 18:24:48.314524 4907 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 18:24:48 crc kubenswrapper[4907]: E0127 18:24:48.314598 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert podName:7c6ac148-bc7a-4480-9155-8f78567a5070 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:50.314581726 +0000 UTC m=+1145.443864338 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert") pod "infra-operator-controller-manager-694cf4f878-mrpqf" (UID: "7c6ac148-bc7a-4480-9155-8f78567a5070") : secret "infra-operator-webhook-server-cert" not found Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.676262 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" event={"ID":"bc6ebe7e-320a-4193-8db4-3d4574ba1c3b","Type":"ContainerStarted","Data":"6454165408aa777c375a77ca4da862571acff80692873e12cf8c1eb5aaffd0e5"} Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.678037 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" event={"ID":"277579e8-58c3-4ad7-b902-e62f045ba8c6","Type":"ContainerStarted","Data":"a3ea6475685af28811e8940cf9ebf7ddde5139d3407bc5c350b12ad62f451133"} Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.680687 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" event={"ID":"a05cfe48-4bf5-4199-aefa-de59259798c4","Type":"ContainerStarted","Data":"6791a1a05cf7edf0e7799004c25a16ca301d30e4fd67e266ecb7b412401354c2"} Jan 27 
18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.682246 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" event={"ID":"f1ed42c6-98ac-41b8-96df-24919c0f9837","Type":"ContainerStarted","Data":"e095b08c4cc61a5961eb4e1cef0d113446ad75718ba0e98ec2f466b23cd4eaaf"} Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.683387 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" event={"ID":"e9f20d2f-16bf-49df-9c41-6fd6faa6ef67","Type":"ContainerStarted","Data":"29416602d1c8f7ccd84c959906995921afdfacd328c4887e69f28aaf840355ac"} Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.684655 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" event={"ID":"c4a64f11-d6ef-487e-afa3-1d9bdbea9424","Type":"ContainerStarted","Data":"24fc48a96bf904cf4a29d042caf4de21f74aae56eacb7efbb8e5a82838646fab"} Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.686493 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" event={"ID":"018e0dfe-5282-40d5-87db-8551645d6e02","Type":"ContainerStarted","Data":"6113f37ffc7c7c00f5ec531319dc966276b9517833070b64257bac01ded176b7"} Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.687393 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" event={"ID":"e257f81e-9460-4391-a7a5-cca3fc9230d9","Type":"ContainerStarted","Data":"bfe815c60cd3dfe7bbad22ef6425cc7275e6e4f4c9c15be7aeeb245b383493e1"} Jan 27 18:24:48 crc kubenswrapper[4907]: I0127 18:24:48.729227 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert\") pod 
\"openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9\" (UID: \"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:24:48 crc kubenswrapper[4907]: E0127 18:24:48.729415 4907 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:24:48 crc kubenswrapper[4907]: E0127 18:24:48.729479 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert podName:8a6e2a40-e233-4dbe-9b63-0fecf3fc1487 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:50.729461788 +0000 UTC m=+1145.858744400 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" (UID: "8a6e2a40-e233-4dbe-9b63-0fecf3fc1487") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.036047 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.036098 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " 
pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:49 crc kubenswrapper[4907]: E0127 18:24:49.036269 4907 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 18:24:49 crc kubenswrapper[4907]: E0127 18:24:49.036368 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs podName:7707f450-bf8d-4e84-9baa-a02bc80a0b22 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:51.036347498 +0000 UTC m=+1146.165630190 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs") pod "openstack-operator-controller-manager-6f954ddc5b-fjchc" (UID: "7707f450-bf8d-4e84-9baa-a02bc80a0b22") : secret "webhook-server-cert" not found Jan 27 18:24:49 crc kubenswrapper[4907]: E0127 18:24:49.036422 4907 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 18:24:49 crc kubenswrapper[4907]: E0127 18:24:49.036658 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs podName:7707f450-bf8d-4e84-9baa-a02bc80a0b22 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:51.036648867 +0000 UTC m=+1146.165931539 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs") pod "openstack-operator-controller-manager-6f954ddc5b-fjchc" (UID: "7707f450-bf8d-4e84-9baa-a02bc80a0b22") : secret "metrics-server-cert" not found Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.071148 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl"] Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.092661 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q"] Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.124021 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm"] Jan 27 18:24:49 crc kubenswrapper[4907]: W0127 18:24:49.126742 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12b8e76f_853f_4eeb_b6c5_e77d05bec357.slice/crio-d0729a91e2f6ee362148963b6e4e0b678d3d4c2fa1fcb10c10db608412f97373 WatchSource:0}: Error finding container d0729a91e2f6ee362148963b6e4e0b678d3d4c2fa1fcb10c10db608412f97373: Status 404 returned error can't find the container with id d0729a91e2f6ee362148963b6e4e0b678d3d4c2fa1fcb10c10db608412f97373 Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.151664 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99"] Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.173030 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt"] Jan 27 18:24:49 crc kubenswrapper[4907]: W0127 18:24:49.173468 4907 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24caa967_ac26_4666_bf41_e2c4bc6ebb0f.slice/crio-64c95de70a93abd276fa952dd89d84c9d6cb5bd492ff8b0f7f2e1b68252bb8c9 WatchSource:0}: Error finding container 64c95de70a93abd276fa952dd89d84c9d6cb5bd492ff8b0f7f2e1b68252bb8c9: Status 404 returned error can't find the container with id 64c95de70a93abd276fa952dd89d84c9d6cb5bd492ff8b0f7f2e1b68252bb8c9 Jan 27 18:24:49 crc kubenswrapper[4907]: W0127 18:24:49.173716 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f5a8eee_f06b_4376_90d6_ff3faef0e8af.slice/crio-8fa5433ff75be4e5b55e22947dee7e73581602fadb6b162a91d421a823f41c4e WatchSource:0}: Error finding container 8fa5433ff75be4e5b55e22947dee7e73581602fadb6b162a91d421a823f41c4e: Status 404 returned error can't find the container with id 8fa5433ff75be4e5b55e22947dee7e73581602fadb6b162a91d421a823f41c4e Jan 27 18:24:49 crc kubenswrapper[4907]: W0127 18:24:49.184806 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba33cbc9_9a56_4c45_8c07_19b4110e03c3.slice/crio-a6f6edf20880bec89b8692ae040a8904c4d39d96c85947474beb3fdf738db82f WatchSource:0}: Error finding container a6f6edf20880bec89b8692ae040a8904c4d39d96c85947474beb3fdf738db82f: Status 404 returned error can't find the container with id a6f6edf20880bec89b8692ae040a8904c4d39d96c85947474beb3fdf738db82f Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.192040 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-wvnrt"] Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.201116 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw"] Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.212034 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l"] Jan 27 18:24:49 crc kubenswrapper[4907]: W0127 18:24:49.219450 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda733096f_e99d_4186_8542_1d8cb16012d2.slice/crio-c900079fe1b8e6df99dd0f776913dcbb43ae5b913f53fc6d837d87b23d9c5c74 WatchSource:0}: Error finding container c900079fe1b8e6df99dd0f776913dcbb43ae5b913f53fc6d837d87b23d9c5c74: Status 404 returned error can't find the container with id c900079fe1b8e6df99dd0f776913dcbb43ae5b913f53fc6d837d87b23d9c5c74 Jan 27 18:24:49 crc kubenswrapper[4907]: W0127 18:24:49.220691 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf84f4e53_c1de_49a3_8435_5e4999a034fd.slice/crio-fc7ebced29bd5165fd39b95a2c5585623f66a337367d56fb6d9e1914a7ec17e4 WatchSource:0}: Error finding container fc7ebced29bd5165fd39b95a2c5585623f66a337367d56fb6d9e1914a7ec17e4: Status 404 returned error can't find the container with id fc7ebced29bd5165fd39b95a2c5585623f66a337367d56fb6d9e1914a7ec17e4 Jan 27 18:24:49 crc kubenswrapper[4907]: W0127 18:24:49.223248 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8e8fa01_e75c_41bc_bfbb_affea0fcc0a2.slice/crio-91df23fcd7ce3e1111963b610e6fae1a52728bd99fbbff5d6041040789140e8a WatchSource:0}: Error finding container 91df23fcd7ce3e1111963b610e6fae1a52728bd99fbbff5d6041040789140e8a: Status 404 returned error can't find the container with id 91df23fcd7ce3e1111963b610e6fae1a52728bd99fbbff5d6041040789140e8a Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.224362 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6"] Jan 27 18:24:49 crc kubenswrapper[4907]: E0127 18:24:49.224870 4907 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ch4p7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-mpgzf_openstack-operators(f84f4e53-c1de-49a3-8435-5e4999a034fd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 18:24:49 crc kubenswrapper[4907]: E0127 18:24:49.224871 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fxrm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4cd88d46-tn4d6_openstack-operators(a733096f-e99d-4186-8542-1d8cb16012d2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 18:24:49 crc kubenswrapper[4907]: E0127 18:24:49.226366 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" podUID="a733096f-e99d-4186-8542-1d8cb16012d2" Jan 27 18:24:49 crc 
kubenswrapper[4907]: E0127 18:24:49.226402 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" podUID="f84f4e53-c1de-49a3-8435-5e4999a034fd" Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.247656 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97"] Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.259011 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf"] Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.697147 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" event={"ID":"12b8e76f-853f-4eeb-b6c5-e77d05bec357","Type":"ContainerStarted","Data":"d0729a91e2f6ee362148963b6e4e0b678d3d4c2fa1fcb10c10db608412f97373"} Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.698616 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" event={"ID":"7f5a8eee-f06b-4376-90d6-ff3faef0e8af","Type":"ContainerStarted","Data":"8fa5433ff75be4e5b55e22947dee7e73581602fadb6b162a91d421a823f41c4e"} Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.700093 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" event={"ID":"f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b","Type":"ContainerStarted","Data":"3277a2d65aa2648fa54a90b7b9c49bfd972a11487493def4251c9975a7afc309"} Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.701866 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt" 
event={"ID":"24caa967-ac26-4666-bf41-e2c4bc6ebb0f","Type":"ContainerStarted","Data":"64c95de70a93abd276fa952dd89d84c9d6cb5bd492ff8b0f7f2e1b68252bb8c9"} Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.703015 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" event={"ID":"bd2d065d-dd6e-43bc-a725-e7fe52c024b1","Type":"ContainerStarted","Data":"7049b97a9434791020e330fe3bda0152252d9f6d0df88d2336280b23da8b7908"} Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.704101 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" event={"ID":"ba33cbc9-9a56-4c45-8c07-19b4110e03c3","Type":"ContainerStarted","Data":"a6f6edf20880bec89b8692ae040a8904c4d39d96c85947474beb3fdf738db82f"} Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.705224 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" event={"ID":"774ac09a-4164-4e22-9ea2-385ac4ef87eb","Type":"ContainerStarted","Data":"c89c168165313113ef77f1cf2263012e2324d10820f794ce3d2f188a8964bbb0"} Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.706502 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" event={"ID":"f84f4e53-c1de-49a3-8435-5e4999a034fd","Type":"ContainerStarted","Data":"fc7ebced29bd5165fd39b95a2c5585623f66a337367d56fb6d9e1914a7ec17e4"} Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.711084 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97" event={"ID":"a4aa00b3-8a54-4f84-907d-34a73b93944f","Type":"ContainerStarted","Data":"535c6c6acf6867e03495967a52c6a00758ec204249df8dc05a49e1f4680dcec3"} Jan 27 18:24:49 crc kubenswrapper[4907]: E0127 18:24:49.712337 4907 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" podUID="f84f4e53-c1de-49a3-8435-5e4999a034fd" Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.712565 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" event={"ID":"a733096f-e99d-4186-8542-1d8cb16012d2","Type":"ContainerStarted","Data":"c900079fe1b8e6df99dd0f776913dcbb43ae5b913f53fc6d837d87b23d9c5c74"} Jan 27 18:24:49 crc kubenswrapper[4907]: E0127 18:24:49.714186 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" podUID="a733096f-e99d-4186-8542-1d8cb16012d2" Jan 27 18:24:49 crc kubenswrapper[4907]: I0127 18:24:49.715093 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l" event={"ID":"a8e8fa01-e75c-41bc-bfbb-affea0fcc0a2","Type":"ContainerStarted","Data":"91df23fcd7ce3e1111963b610e6fae1a52728bd99fbbff5d6041040789140e8a"} Jan 27 18:24:50 crc kubenswrapper[4907]: I0127 18:24:50.358942 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert\") pod \"infra-operator-controller-manager-694cf4f878-mrpqf\" (UID: \"7c6ac148-bc7a-4480-9155-8f78567a5070\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:24:50 crc 
kubenswrapper[4907]: E0127 18:24:50.359328 4907 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 18:24:50 crc kubenswrapper[4907]: E0127 18:24:50.359378 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert podName:7c6ac148-bc7a-4480-9155-8f78567a5070 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:54.359361369 +0000 UTC m=+1149.488643981 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert") pod "infra-operator-controller-manager-694cf4f878-mrpqf" (UID: "7c6ac148-bc7a-4480-9155-8f78567a5070") : secret "infra-operator-webhook-server-cert" not found Jan 27 18:24:50 crc kubenswrapper[4907]: E0127 18:24:50.730789 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" podUID="f84f4e53-c1de-49a3-8435-5e4999a034fd" Jan 27 18:24:50 crc kubenswrapper[4907]: E0127 18:24:50.731331 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" podUID="a733096f-e99d-4186-8542-1d8cb16012d2" Jan 27 18:24:50 crc kubenswrapper[4907]: I0127 18:24:50.765994 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9\" (UID: \"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:24:50 crc kubenswrapper[4907]: E0127 18:24:50.766251 4907 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:24:50 crc kubenswrapper[4907]: E0127 18:24:50.766358 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert podName:8a6e2a40-e233-4dbe-9b63-0fecf3fc1487 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:54.766335812 +0000 UTC m=+1149.895618424 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" (UID: "8a6e2a40-e233-4dbe-9b63-0fecf3fc1487") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:24:51 crc kubenswrapper[4907]: I0127 18:24:51.071199 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:51 crc kubenswrapper[4907]: I0127 18:24:51.071269 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: 
\"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:51 crc kubenswrapper[4907]: E0127 18:24:51.071374 4907 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 18:24:51 crc kubenswrapper[4907]: E0127 18:24:51.071450 4907 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 18:24:51 crc kubenswrapper[4907]: E0127 18:24:51.071465 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs podName:7707f450-bf8d-4e84-9baa-a02bc80a0b22 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:55.071445572 +0000 UTC m=+1150.200728184 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs") pod "openstack-operator-controller-manager-6f954ddc5b-fjchc" (UID: "7707f450-bf8d-4e84-9baa-a02bc80a0b22") : secret "metrics-server-cert" not found Jan 27 18:24:51 crc kubenswrapper[4907]: E0127 18:24:51.071535 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs podName:7707f450-bf8d-4e84-9baa-a02bc80a0b22 nodeName:}" failed. No retries permitted until 2026-01-27 18:24:55.071516244 +0000 UTC m=+1150.200798856 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs") pod "openstack-operator-controller-manager-6f954ddc5b-fjchc" (UID: "7707f450-bf8d-4e84-9baa-a02bc80a0b22") : secret "webhook-server-cert" not found Jan 27 18:24:54 crc kubenswrapper[4907]: I0127 18:24:54.390622 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert\") pod \"infra-operator-controller-manager-694cf4f878-mrpqf\" (UID: \"7c6ac148-bc7a-4480-9155-8f78567a5070\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:24:54 crc kubenswrapper[4907]: E0127 18:24:54.390789 4907 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 18:24:54 crc kubenswrapper[4907]: E0127 18:24:54.391221 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert podName:7c6ac148-bc7a-4480-9155-8f78567a5070 nodeName:}" failed. No retries permitted until 2026-01-27 18:25:02.391202042 +0000 UTC m=+1157.520484654 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert") pod "infra-operator-controller-manager-694cf4f878-mrpqf" (UID: "7c6ac148-bc7a-4480-9155-8f78567a5070") : secret "infra-operator-webhook-server-cert" not found Jan 27 18:24:54 crc kubenswrapper[4907]: I0127 18:24:54.801398 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9\" (UID: \"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:24:54 crc kubenswrapper[4907]: E0127 18:24:54.801660 4907 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:24:54 crc kubenswrapper[4907]: E0127 18:24:54.801744 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert podName:8a6e2a40-e233-4dbe-9b63-0fecf3fc1487 nodeName:}" failed. No retries permitted until 2026-01-27 18:25:02.801725568 +0000 UTC m=+1157.931008180 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" (UID: "8a6e2a40-e233-4dbe-9b63-0fecf3fc1487") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:24:55 crc kubenswrapper[4907]: I0127 18:24:55.110170 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:55 crc kubenswrapper[4907]: I0127 18:24:55.110292 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:24:55 crc kubenswrapper[4907]: E0127 18:24:55.110759 4907 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 18:24:55 crc kubenswrapper[4907]: E0127 18:24:55.110873 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs podName:7707f450-bf8d-4e84-9baa-a02bc80a0b22 nodeName:}" failed. No retries permitted until 2026-01-27 18:25:03.110846142 +0000 UTC m=+1158.240128774 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs") pod "openstack-operator-controller-manager-6f954ddc5b-fjchc" (UID: "7707f450-bf8d-4e84-9baa-a02bc80a0b22") : secret "metrics-server-cert" not found Jan 27 18:24:55 crc kubenswrapper[4907]: E0127 18:24:55.111337 4907 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 18:24:55 crc kubenswrapper[4907]: E0127 18:24:55.111526 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs podName:7707f450-bf8d-4e84-9baa-a02bc80a0b22 nodeName:}" failed. No retries permitted until 2026-01-27 18:25:03.111478631 +0000 UTC m=+1158.240761243 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs") pod "openstack-operator-controller-manager-6f954ddc5b-fjchc" (UID: "7707f450-bf8d-4e84-9baa-a02bc80a0b22") : secret "webhook-server-cert" not found Jan 27 18:24:56 crc kubenswrapper[4907]: I0127 18:24:56.521203 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:24:56 crc kubenswrapper[4907]: I0127 18:24:56.521494 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:24:56 crc kubenswrapper[4907]: I0127 18:24:56.521548 4907 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:24:56 crc kubenswrapper[4907]: I0127 18:24:56.522341 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"07545f0ac6e9596ef48552354e292c52ec4eabdd5bcbde6f20c6f81f90669809"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:24:56 crc kubenswrapper[4907]: I0127 18:24:56.522409 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://07545f0ac6e9596ef48552354e292c52ec4eabdd5bcbde6f20c6f81f90669809" gracePeriod=600 Jan 27 18:24:56 crc kubenswrapper[4907]: I0127 18:24:56.782644 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="07545f0ac6e9596ef48552354e292c52ec4eabdd5bcbde6f20c6f81f90669809" exitCode=0 Jan 27 18:24:56 crc kubenswrapper[4907]: I0127 18:24:56.782704 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"07545f0ac6e9596ef48552354e292c52ec4eabdd5bcbde6f20c6f81f90669809"} Jan 27 18:24:56 crc kubenswrapper[4907]: I0127 18:24:56.782749 4907 scope.go:117] "RemoveContainer" containerID="6c6457fcad0aadd72b623dd84842669e5ae8a7cd9babd90c21be3d1544aa1b2c" Jan 27 18:25:00 crc kubenswrapper[4907]: E0127 18:25:00.027960 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7" Jan 27 18:25:00 crc kubenswrapper[4907]: E0127 18:25:00.028126 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-slszq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-7478f7dbf9-nznnn_openstack-operators(018e0dfe-5282-40d5-87db-8551645d6e02): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:00 crc kubenswrapper[4907]: E0127 18:25:00.029295 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" podUID="018e0dfe-5282-40d5-87db-8551645d6e02" Jan 27 18:25:00 crc kubenswrapper[4907]: E0127 18:25:00.821167 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" podUID="018e0dfe-5282-40d5-87db-8551645d6e02" Jan 27 18:25:01 crc kubenswrapper[4907]: E0127 18:25:01.292159 4907 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 27 18:25:01 crc kubenswrapper[4907]: E0127 18:25:01.292421 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xr8dq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-b29cj_openstack-operators(f1ed42c6-98ac-41b8-96df-24919c0f9837): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:01 crc kubenswrapper[4907]: E0127 18:25:01.293604 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" podUID="f1ed42c6-98ac-41b8-96df-24919c0f9837" Jan 27 18:25:01 crc kubenswrapper[4907]: E0127 18:25:01.803300 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 27 18:25:01 crc kubenswrapper[4907]: E0127 18:25:01.803547 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mdgpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-4nlx7_openstack-operators(e9f20d2f-16bf-49df-9c41-6fd6faa6ef67): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:01 crc kubenswrapper[4907]: E0127 18:25:01.805455 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" podUID="e9f20d2f-16bf-49df-9c41-6fd6faa6ef67" Jan 27 18:25:01 crc kubenswrapper[4907]: E0127 18:25:01.846605 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" podUID="e9f20d2f-16bf-49df-9c41-6fd6faa6ef67" Jan 27 18:25:01 crc kubenswrapper[4907]: E0127 18:25:01.846765 4907 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" podUID="f1ed42c6-98ac-41b8-96df-24919c0f9837" Jan 27 18:25:02 crc kubenswrapper[4907]: E0127 18:25:02.330663 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84" Jan 27 18:25:02 crc kubenswrapper[4907]: E0127 18:25:02.330902 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v5nnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-9t69q_openstack-operators(f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:02 crc kubenswrapper[4907]: E0127 18:25:02.332141 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" podUID="f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b" Jan 27 18:25:02 crc kubenswrapper[4907]: I0127 18:25:02.461369 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert\") pod 
\"infra-operator-controller-manager-694cf4f878-mrpqf\" (UID: \"7c6ac148-bc7a-4480-9155-8f78567a5070\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:25:02 crc kubenswrapper[4907]: I0127 18:25:02.468403 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c6ac148-bc7a-4480-9155-8f78567a5070-cert\") pod \"infra-operator-controller-manager-694cf4f878-mrpqf\" (UID: \"7c6ac148-bc7a-4480-9155-8f78567a5070\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:25:02 crc kubenswrapper[4907]: I0127 18:25:02.713701 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:25:02 crc kubenswrapper[4907]: E0127 18:25:02.846135 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" podUID="f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b" Jan 27 18:25:02 crc kubenswrapper[4907]: I0127 18:25:02.868691 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9\" (UID: \"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:25:02 crc kubenswrapper[4907]: I0127 18:25:02.872578 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a6e2a40-e233-4dbe-9b63-0fecf3fc1487-cert\") pod 
\"openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9\" (UID: \"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:25:03 crc kubenswrapper[4907]: I0127 18:25:03.074591 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:25:03 crc kubenswrapper[4907]: I0127 18:25:03.173528 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:25:03 crc kubenswrapper[4907]: I0127 18:25:03.173606 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:25:03 crc kubenswrapper[4907]: E0127 18:25:03.173931 4907 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 18:25:03 crc kubenswrapper[4907]: E0127 18:25:03.174060 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs podName:7707f450-bf8d-4e84-9baa-a02bc80a0b22 nodeName:}" failed. No retries permitted until 2026-01-27 18:25:19.174026075 +0000 UTC m=+1174.303308727 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs") pod "openstack-operator-controller-manager-6f954ddc5b-fjchc" (UID: "7707f450-bf8d-4e84-9baa-a02bc80a0b22") : secret "webhook-server-cert" not found Jan 27 18:25:03 crc kubenswrapper[4907]: I0127 18:25:03.183068 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-metrics-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:25:04 crc kubenswrapper[4907]: E0127 18:25:04.061170 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 27 18:25:04 crc kubenswrapper[4907]: E0127 18:25:04.061761 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-44ftf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-6lprh_openstack-operators(277579e8-58c3-4ad7-b902-e62f045ba8c6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:04 crc kubenswrapper[4907]: E0127 18:25:04.063006 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" podUID="277579e8-58c3-4ad7-b902-e62f045ba8c6" Jan 27 18:25:04 crc kubenswrapper[4907]: E0127 18:25:04.585018 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e" Jan 27 18:25:04 crc kubenswrapper[4907]: E0127 18:25:04.585194 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rhw5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-l2pdl_openstack-operators(774ac09a-4164-4e22-9ea2-385ac4ef87eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:04 crc kubenswrapper[4907]: E0127 18:25:04.588692 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" podUID="774ac09a-4164-4e22-9ea2-385ac4ef87eb" Jan 27 18:25:04 crc kubenswrapper[4907]: E0127 18:25:04.862739 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" podUID="774ac09a-4164-4e22-9ea2-385ac4ef87eb" Jan 27 18:25:04 crc kubenswrapper[4907]: E0127 18:25:04.863067 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" podUID="277579e8-58c3-4ad7-b902-e62f045ba8c6" Jan 27 18:25:07 crc kubenswrapper[4907]: E0127 18:25:07.609756 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337" Jan 27 18:25:07 crc kubenswrapper[4907]: E0127 18:25:07.610391 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l2xqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-78fdd796fd-7hgqc_openstack-operators(a05cfe48-4bf5-4199-aefa-de59259798c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:07 crc kubenswrapper[4907]: E0127 18:25:07.611678 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" podUID="a05cfe48-4bf5-4199-aefa-de59259798c4" Jan 27 18:25:07 crc kubenswrapper[4907]: E0127 18:25:07.890999 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337\\\"\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" podUID="a05cfe48-4bf5-4199-aefa-de59259798c4" Jan 27 18:25:09 crc kubenswrapper[4907]: E0127 18:25:09.105895 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 27 18:25:09 crc kubenswrapper[4907]: E0127 18:25:09.106078 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r6sgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-ph8fw_openstack-operators(7f5a8eee-f06b-4376-90d6-ff3faef0e8af): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:09 crc kubenswrapper[4907]: E0127 18:25:09.108397 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" podUID="7f5a8eee-f06b-4376-90d6-ff3faef0e8af" Jan 27 18:25:09 crc kubenswrapper[4907]: E0127 18:25:09.592880 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658" Jan 27 18:25:09 crc kubenswrapper[4907]: E0127 18:25:09.593363 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kszk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-fnh99_openstack-operators(bd2d065d-dd6e-43bc-a725-e7fe52c024b1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:09 crc kubenswrapper[4907]: E0127 18:25:09.594715 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" podUID="bd2d065d-dd6e-43bc-a725-e7fe52c024b1" Jan 27 18:25:09 crc kubenswrapper[4907]: E0127 18:25:09.910214 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" podUID="bd2d065d-dd6e-43bc-a725-e7fe52c024b1" Jan 27 18:25:09 crc kubenswrapper[4907]: E0127 18:25:09.910471 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" podUID="7f5a8eee-f06b-4376-90d6-ff3faef0e8af" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.049747 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.049983 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8l2xt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-kjhgn_openstack-operators(e257f81e-9460-4391-a7a5-cca3fc9230d9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.051235 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" podUID="e257f81e-9460-4391-a7a5-cca3fc9230d9" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.131111 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.198:5001/openstack-k8s-operators/telemetry-operator:52eb95a35094003a8e2a299be325dafa922cfded" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.131272 4907 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.198:5001/openstack-k8s-operators/telemetry-operator:52eb95a35094003a8e2a299be325dafa922cfded" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.131943 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.198:5001/openstack-k8s-operators/telemetry-operator:52eb95a35094003a8e2a299be325dafa922cfded,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lb4hm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7567458d64-vvlhm_openstack-operators(12b8e76f-853f-4eeb-b6c5-e77d05bec357): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.133176 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" podUID="12b8e76f-853f-4eeb-b6c5-e77d05bec357" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.588160 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.588339 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pp9gk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-gfl97_openstack-operators(a4aa00b3-8a54-4f84-907d-34a73b93944f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.589840 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97" podUID="a4aa00b3-8a54-4f84-907d-34a73b93944f" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.916907 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97" podUID="a4aa00b3-8a54-4f84-907d-34a73b93944f" Jan 27 18:25:10 crc 
kubenswrapper[4907]: E0127 18:25:10.916926 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.198:5001/openstack-k8s-operators/telemetry-operator:52eb95a35094003a8e2a299be325dafa922cfded\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" podUID="12b8e76f-853f-4eeb-b6c5-e77d05bec357" Jan 27 18:25:10 crc kubenswrapper[4907]: E0127 18:25:10.917186 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" podUID="e257f81e-9460-4391-a7a5-cca3fc9230d9" Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.158435 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf"] Jan 27 18:25:12 crc kubenswrapper[4907]: W0127 18:25:12.166520 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c6ac148_bc7a_4480_9155_8f78567a5070.slice/crio-871aecd5eaaeff755984ce942134f64b3a25b7cbfb12cafbc5abb890da628418 WatchSource:0}: Error finding container 871aecd5eaaeff755984ce942134f64b3a25b7cbfb12cafbc5abb890da628418: Status 404 returned error can't find the container with id 871aecd5eaaeff755984ce942134f64b3a25b7cbfb12cafbc5abb890da628418 Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.184522 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9"] Jan 27 18:25:12 crc kubenswrapper[4907]: W0127 18:25:12.194825 4907 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a6e2a40_e233_4dbe_9b63_0fecf3fc1487.slice/crio-34698ba8b9af47aaf6ec17818d02ed435e91b5147a9612522b6211ec96bb3c2c WatchSource:0}: Error finding container 34698ba8b9af47aaf6ec17818d02ed435e91b5147a9612522b6211ec96bb3c2c: Status 404 returned error can't find the container with id 34698ba8b9af47aaf6ec17818d02ed435e91b5147a9612522b6211ec96bb3c2c Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.944007 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"09255cfe56907a7b3b5ba34ba9dd0c7542d64f0e4b965b5da61b9cf87189cb31"} Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.945645 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" event={"ID":"c4a64f11-d6ef-487e-afa3-1d9bdbea9424","Type":"ContainerStarted","Data":"9efd027ef1c377220fa8f340dbee3ae67ce228fe71bb1d54f0e67e85fdad2175"} Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.946076 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.946924 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" event={"ID":"7c6ac148-bc7a-4480-9155-8f78567a5070","Type":"ContainerStarted","Data":"871aecd5eaaeff755984ce942134f64b3a25b7cbfb12cafbc5abb890da628418"} Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.948096 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" 
event={"ID":"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487","Type":"ContainerStarted","Data":"34698ba8b9af47aaf6ec17818d02ed435e91b5147a9612522b6211ec96bb3c2c"} Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.949702 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt" event={"ID":"24caa967-ac26-4666-bf41-e2c4bc6ebb0f","Type":"ContainerStarted","Data":"aaa792a6850bf30c995f2388705bd61ee5da27dcd36f052b9830b50ce66d4f57"} Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.949857 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt" Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.951322 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" event={"ID":"bc6ebe7e-320a-4193-8db4-3d4574ba1c3b","Type":"ContainerStarted","Data":"fdefe0078798864fd86efd52e2d0b196ae938ad85159ea735c3bfc8ec988c404"} Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.951414 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.952886 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" event={"ID":"ba33cbc9-9a56-4c45-8c07-19b4110e03c3","Type":"ContainerStarted","Data":"d8bdbbf49db6fc27563d452c8162685379dba3541a682d34067598486eb1c5f7"} Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.953019 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.954205 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l" event={"ID":"a8e8fa01-e75c-41bc-bfbb-affea0fcc0a2","Type":"ContainerStarted","Data":"dc11e3612339e33d628cef05cf8bb6c9ba5cc25baf6be59cd4475859647a42fa"} Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.954309 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l" Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.955493 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" event={"ID":"f84f4e53-c1de-49a3-8435-5e4999a034fd","Type":"ContainerStarted","Data":"7d08e58ef5420364403ed35c827110949685dd78c2f0d6d0a9ef915cc60cb69b"} Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.955636 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.956882 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt" event={"ID":"e6378a4c-96e5-4151-a0ca-c320fa9b667d","Type":"ContainerStarted","Data":"f91c4972af323848ffd12c798863c07bd74711b0d6bb5069960abf56894032b3"} Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.957146 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt" Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.958248 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" event={"ID":"a733096f-e99d-4186-8542-1d8cb16012d2","Type":"ContainerStarted","Data":"d9b11c82957494396cb5619801b7c27c5a306b8775088bf3a26c5585d8a7e6bd"} Jan 27 18:25:12 crc kubenswrapper[4907]: I0127 18:25:12.958523 4907 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" Jan 27 18:25:13 crc kubenswrapper[4907]: I0127 18:25:13.000263 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt" podStartSLOduration=4.414772637 podStartE2EDuration="27.00024021s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:47.545334947 +0000 UTC m=+1142.674617559" lastFinishedPulling="2026-01-27 18:25:10.13080252 +0000 UTC m=+1165.260085132" observedRunningTime="2026-01-27 18:25:13.00022878 +0000 UTC m=+1168.129511392" watchObservedRunningTime="2026-01-27 18:25:13.00024021 +0000 UTC m=+1168.129522822" Jan 27 18:25:13 crc kubenswrapper[4907]: I0127 18:25:13.015958 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt" podStartSLOduration=7.081265651 podStartE2EDuration="27.015941601s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:49.178273206 +0000 UTC m=+1144.307555818" lastFinishedPulling="2026-01-27 18:25:09.112949156 +0000 UTC m=+1164.242231768" observedRunningTime="2026-01-27 18:25:13.015251891 +0000 UTC m=+1168.144534503" watchObservedRunningTime="2026-01-27 18:25:13.015941601 +0000 UTC m=+1168.145224213" Jan 27 18:25:13 crc kubenswrapper[4907]: I0127 18:25:13.032713 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" podStartSLOduration=4.476880348 podStartE2EDuration="27.032696131s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:49.224755725 +0000 UTC m=+1144.354038337" lastFinishedPulling="2026-01-27 18:25:11.780571508 +0000 UTC m=+1166.909854120" observedRunningTime="2026-01-27 18:25:13.027755819 +0000 
UTC m=+1168.157038431" watchObservedRunningTime="2026-01-27 18:25:13.032696131 +0000 UTC m=+1168.161978743" Jan 27 18:25:13 crc kubenswrapper[4907]: I0127 18:25:13.042304 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" podStartSLOduration=6.195429004 podStartE2EDuration="27.042283836s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:49.188671566 +0000 UTC m=+1144.317954168" lastFinishedPulling="2026-01-27 18:25:10.035526348 +0000 UTC m=+1165.164809000" observedRunningTime="2026-01-27 18:25:13.038851538 +0000 UTC m=+1168.168134150" watchObservedRunningTime="2026-01-27 18:25:13.042283836 +0000 UTC m=+1168.171566448" Jan 27 18:25:13 crc kubenswrapper[4907]: I0127 18:25:13.061099 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l" podStartSLOduration=5.151314671 podStartE2EDuration="27.061082395s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:49.224742595 +0000 UTC m=+1144.354025207" lastFinishedPulling="2026-01-27 18:25:11.134510319 +0000 UTC m=+1166.263792931" observedRunningTime="2026-01-27 18:25:13.058353177 +0000 UTC m=+1168.187635789" watchObservedRunningTime="2026-01-27 18:25:13.061082395 +0000 UTC m=+1168.190365007" Jan 27 18:25:13 crc kubenswrapper[4907]: I0127 18:25:13.072302 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" podStartSLOduration=5.265536494 podStartE2EDuration="27.072288487s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:48.228869917 +0000 UTC m=+1143.358152519" lastFinishedPulling="2026-01-27 18:25:10.0356219 +0000 UTC m=+1165.164904512" observedRunningTime="2026-01-27 18:25:13.070707821 +0000 UTC m=+1168.199990423" 
watchObservedRunningTime="2026-01-27 18:25:13.072288487 +0000 UTC m=+1168.201571099" Jan 27 18:25:13 crc kubenswrapper[4907]: I0127 18:25:13.118642 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" podStartSLOduration=4.582461017 podStartE2EDuration="27.118625616s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:49.224751475 +0000 UTC m=+1144.354034087" lastFinishedPulling="2026-01-27 18:25:11.760916084 +0000 UTC m=+1166.890198686" observedRunningTime="2026-01-27 18:25:13.102918885 +0000 UTC m=+1168.232201507" watchObservedRunningTime="2026-01-27 18:25:13.118625616 +0000 UTC m=+1168.247908228" Jan 27 18:25:13 crc kubenswrapper[4907]: I0127 18:25:13.120887 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" podStartSLOduration=15.127269096 podStartE2EDuration="27.12087876s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:48.017956121 +0000 UTC m=+1143.147238733" lastFinishedPulling="2026-01-27 18:25:00.011565785 +0000 UTC m=+1155.140848397" observedRunningTime="2026-01-27 18:25:13.115387253 +0000 UTC m=+1168.244669875" watchObservedRunningTime="2026-01-27 18:25:13.12087876 +0000 UTC m=+1168.250161372" Jan 27 18:25:17 crc kubenswrapper[4907]: I0127 18:25:17.165478 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" Jan 27 18:25:17 crc kubenswrapper[4907]: I0127 18:25:17.321345 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" Jan 27 18:25:17 crc kubenswrapper[4907]: I0127 18:25:17.461436 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l" Jan 27 18:25:17 crc kubenswrapper[4907]: I0127 18:25:17.495580 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" Jan 27 18:25:17 crc kubenswrapper[4907]: I0127 18:25:17.504209 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt" Jan 27 18:25:17 crc kubenswrapper[4907]: I0127 18:25:17.609729 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" Jan 27 18:25:17 crc kubenswrapper[4907]: I0127 18:25:17.999081 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" event={"ID":"e9f20d2f-16bf-49df-9c41-6fd6faa6ef67","Type":"ContainerStarted","Data":"240a72c5b52349d6b5bef7a2cbbe50b43517d40b7fec57cdf1e23e733eff2b3f"} Jan 27 18:25:17 crc kubenswrapper[4907]: I0127 18:25:17.999346 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" Jan 27 18:25:18 crc kubenswrapper[4907]: I0127 18:25:18.022445 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" podStartSLOduration=3.020981698 podStartE2EDuration="32.022404444s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:47.999455389 +0000 UTC m=+1143.128738001" lastFinishedPulling="2026-01-27 18:25:17.000878115 +0000 UTC m=+1172.130160747" observedRunningTime="2026-01-27 18:25:18.014718463 +0000 UTC m=+1173.144001075" watchObservedRunningTime="2026-01-27 18:25:18.022404444 +0000 UTC m=+1173.151687056" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.008096 
4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" event={"ID":"774ac09a-4164-4e22-9ea2-385ac4ef87eb","Type":"ContainerStarted","Data":"cd67c7484dd024f40584304f1743c918c2fca9cb0465132c65128cd9cb711873"} Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.008708 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.009355 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" event={"ID":"018e0dfe-5282-40d5-87db-8551645d6e02","Type":"ContainerStarted","Data":"b5aa252e15e301a390a646e1dc30e8c068a761a272a7ac092776578f3920eba9"} Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.009991 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.011114 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" event={"ID":"7c6ac148-bc7a-4480-9155-8f78567a5070","Type":"ContainerStarted","Data":"212e88fff355323ad386c5b1bf1a33363f24e77fe11d9ecb10ee883253b1232a"} Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.011512 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.012478 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" event={"ID":"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487","Type":"ContainerStarted","Data":"d6f1ffc460e9a1b68dde86df48126cc9b5326fd8bc608e058ebd692fa28b61f5"} Jan 27 18:25:19 crc 
kubenswrapper[4907]: I0127 18:25:19.012867 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.014146 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" event={"ID":"f1ed42c6-98ac-41b8-96df-24919c0f9837","Type":"ContainerStarted","Data":"3605e3de4992657560adcedd6736025307c02ec192c2480d862bfcd2d5259408"} Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.014505 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.015923 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" event={"ID":"f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b","Type":"ContainerStarted","Data":"eedd421bc5c7c3d1953d09a14f5c71ef59a435eb019a1187fa9fd5e00be2a59e"} Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.016256 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.028127 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" podStartSLOduration=4.127233594 podStartE2EDuration="33.028110499s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:49.103848292 +0000 UTC m=+1144.233130904" lastFinishedPulling="2026-01-27 18:25:18.004725197 +0000 UTC m=+1173.134007809" observedRunningTime="2026-01-27 18:25:19.024742302 +0000 UTC m=+1174.154024914" watchObservedRunningTime="2026-01-27 18:25:19.028110499 +0000 UTC m=+1174.157393121" Jan 27 18:25:19 
crc kubenswrapper[4907]: I0127 18:25:19.041631 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" podStartSLOduration=3.342441696 podStartE2EDuration="33.041615026s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:48.23070801 +0000 UTC m=+1143.359990622" lastFinishedPulling="2026-01-27 18:25:17.92988134 +0000 UTC m=+1173.059163952" observedRunningTime="2026-01-27 18:25:19.040609737 +0000 UTC m=+1174.169892349" watchObservedRunningTime="2026-01-27 18:25:19.041615026 +0000 UTC m=+1174.170897638" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.065186 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" podStartSLOduration=2.873155174 podStartE2EDuration="33.065170081s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:47.734670871 +0000 UTC m=+1142.863953483" lastFinishedPulling="2026-01-27 18:25:17.926685778 +0000 UTC m=+1173.055968390" observedRunningTime="2026-01-27 18:25:19.060778926 +0000 UTC m=+1174.190061538" watchObservedRunningTime="2026-01-27 18:25:19.065170081 +0000 UTC m=+1174.194452693" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.110114 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" podStartSLOduration=4.013508092 podStartE2EDuration="33.11009278s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:49.103882753 +0000 UTC m=+1144.233165355" lastFinishedPulling="2026-01-27 18:25:18.200467421 +0000 UTC m=+1173.329750043" observedRunningTime="2026-01-27 18:25:19.108602887 +0000 UTC m=+1174.237885499" watchObservedRunningTime="2026-01-27 18:25:19.11009278 +0000 UTC m=+1174.239375392" Jan 27 18:25:19 crc 
kubenswrapper[4907]: I0127 18:25:19.130665 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" podStartSLOduration=27.372505757 podStartE2EDuration="33.130642799s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:25:12.168545026 +0000 UTC m=+1167.297827638" lastFinishedPulling="2026-01-27 18:25:17.926682068 +0000 UTC m=+1173.055964680" observedRunningTime="2026-01-27 18:25:19.130083533 +0000 UTC m=+1174.259366135" watchObservedRunningTime="2026-01-27 18:25:19.130642799 +0000 UTC m=+1174.259925411" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.160461 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" podStartSLOduration=27.311594119 podStartE2EDuration="33.160440334s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:25:12.19694631 +0000 UTC m=+1167.326228922" lastFinishedPulling="2026-01-27 18:25:18.045792535 +0000 UTC m=+1173.175075137" observedRunningTime="2026-01-27 18:25:19.15960799 +0000 UTC m=+1174.288890622" watchObservedRunningTime="2026-01-27 18:25:19.160440334 +0000 UTC m=+1174.289722946" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.186453 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.203493 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/7707f450-bf8d-4e84-9baa-a02bc80a0b22-webhook-certs\") pod \"openstack-operator-controller-manager-6f954ddc5b-fjchc\" (UID: \"7707f450-bf8d-4e84-9baa-a02bc80a0b22\") " pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.458959 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:25:19 crc kubenswrapper[4907]: W0127 18:25:19.945770 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7707f450_bf8d_4e84_9baa_a02bc80a0b22.slice/crio-dd135fe5cce7e3e9a427f4704fa7583de4772a43706c24b46483c2562870efc4 WatchSource:0}: Error finding container dd135fe5cce7e3e9a427f4704fa7583de4772a43706c24b46483c2562870efc4: Status 404 returned error can't find the container with id dd135fe5cce7e3e9a427f4704fa7583de4772a43706c24b46483c2562870efc4 Jan 27 18:25:19 crc kubenswrapper[4907]: I0127 18:25:19.945841 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc"] Jan 27 18:25:20 crc kubenswrapper[4907]: I0127 18:25:20.022769 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" event={"ID":"277579e8-58c3-4ad7-b902-e62f045ba8c6","Type":"ContainerStarted","Data":"8687d1219ca964b3de928510b6385dc80da90b8353abdc933b1e6113258ed971"} Jan 27 18:25:20 crc kubenswrapper[4907]: I0127 18:25:20.022979 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" Jan 27 18:25:20 crc kubenswrapper[4907]: I0127 18:25:20.025105 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" 
event={"ID":"7707f450-bf8d-4e84-9baa-a02bc80a0b22","Type":"ContainerStarted","Data":"dd135fe5cce7e3e9a427f4704fa7583de4772a43706c24b46483c2562870efc4"} Jan 27 18:25:20 crc kubenswrapper[4907]: I0127 18:25:20.036832 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" podStartSLOduration=2.887386598 podStartE2EDuration="34.03681502s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:48.017360014 +0000 UTC m=+1143.146642626" lastFinishedPulling="2026-01-27 18:25:19.166788436 +0000 UTC m=+1174.296071048" observedRunningTime="2026-01-27 18:25:20.036163311 +0000 UTC m=+1175.165445933" watchObservedRunningTime="2026-01-27 18:25:20.03681502 +0000 UTC m=+1175.166097632" Jan 27 18:25:21 crc kubenswrapper[4907]: I0127 18:25:21.035800 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" event={"ID":"7707f450-bf8d-4e84-9baa-a02bc80a0b22","Type":"ContainerStarted","Data":"7cfc45be3b2e07dc1c7c5e34289626706c60ade81bc2578f1a6bc9c764b8726b"} Jan 27 18:25:21 crc kubenswrapper[4907]: I0127 18:25:21.076859 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" podStartSLOduration=34.07684097 podStartE2EDuration="34.07684097s" podCreationTimestamp="2026-01-27 18:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:25:21.064470835 +0000 UTC m=+1176.193753457" watchObservedRunningTime="2026-01-27 18:25:21.07684097 +0000 UTC m=+1176.206123582" Jan 27 18:25:22 crc kubenswrapper[4907]: I0127 18:25:22.045457 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:25:23 
crc kubenswrapper[4907]: I0127 18:25:23.083774 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 18:25:24 crc kubenswrapper[4907]: I0127 18:25:24.069816 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" event={"ID":"a05cfe48-4bf5-4199-aefa-de59259798c4","Type":"ContainerStarted","Data":"5689924b2146070aa42522ec58218e2f214b9c2865a1996704d145530362175e"} Jan 27 18:25:24 crc kubenswrapper[4907]: I0127 18:25:24.070809 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" Jan 27 18:25:24 crc kubenswrapper[4907]: I0127 18:25:24.072280 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" event={"ID":"7f5a8eee-f06b-4376-90d6-ff3faef0e8af","Type":"ContainerStarted","Data":"67602a1f42cb5fae5c0acf680123da146665fa2e7f522560e1b12b95218a72a6"} Jan 27 18:25:24 crc kubenswrapper[4907]: I0127 18:25:24.072544 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" Jan 27 18:25:24 crc kubenswrapper[4907]: I0127 18:25:24.098724 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" podStartSLOduration=2.8572543809999997 podStartE2EDuration="38.098697841s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:47.99464893 +0000 UTC m=+1143.123931542" lastFinishedPulling="2026-01-27 18:25:23.23609238 +0000 UTC m=+1178.365375002" observedRunningTime="2026-01-27 18:25:24.088634312 +0000 UTC m=+1179.217917004" watchObservedRunningTime="2026-01-27 18:25:24.098697841 +0000 UTC m=+1179.227980493" Jan 27 
18:25:24 crc kubenswrapper[4907]: I0127 18:25:24.112300 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" podStartSLOduration=4.062847048 podStartE2EDuration="38.1122666s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:49.186208365 +0000 UTC m=+1144.315490977" lastFinishedPulling="2026-01-27 18:25:23.235627917 +0000 UTC m=+1178.364910529" observedRunningTime="2026-01-27 18:25:24.105361692 +0000 UTC m=+1179.234644344" watchObservedRunningTime="2026-01-27 18:25:24.1122666 +0000 UTC m=+1179.241549252" Jan 27 18:25:25 crc kubenswrapper[4907]: I0127 18:25:25.085657 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" event={"ID":"bd2d065d-dd6e-43bc-a725-e7fe52c024b1","Type":"ContainerStarted","Data":"0ea3a0756688d726f061762e83cd00694fde87d8c1c2a0d6356745db391935da"} Jan 27 18:25:25 crc kubenswrapper[4907]: I0127 18:25:25.086628 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" Jan 27 18:25:25 crc kubenswrapper[4907]: I0127 18:25:25.114113 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" podStartSLOduration=4.064147139 podStartE2EDuration="39.114075924s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:49.129655296 +0000 UTC m=+1144.258937908" lastFinishedPulling="2026-01-27 18:25:24.179584071 +0000 UTC m=+1179.308866693" observedRunningTime="2026-01-27 18:25:25.102283026 +0000 UTC m=+1180.231565638" watchObservedRunningTime="2026-01-27 18:25:25.114075924 +0000 UTC m=+1180.243358576" Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.094498 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" event={"ID":"12b8e76f-853f-4eeb-b6c5-e77d05bec357","Type":"ContainerStarted","Data":"e1d9f4b07a05b53784b99f794aec021f7adb82a7ee18f9d2c992d0210f48e64b"} Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.094802 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.096286 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97" event={"ID":"a4aa00b3-8a54-4f84-907d-34a73b93944f","Type":"ContainerStarted","Data":"4cfb754c9a23cd806c6f62d79042c13099fd3acb70f4b669e95dbd00fafa1efd"} Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.097432 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" event={"ID":"e257f81e-9460-4391-a7a5-cca3fc9230d9","Type":"ContainerStarted","Data":"afe0cabed815da7093f1942f41aed4f24204bb15d7cb9b08b6b20e3098e26d17"} Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.097839 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.113617 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" podStartSLOduration=3.433439108 podStartE2EDuration="40.113600011s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:49.129652366 +0000 UTC m=+1144.258934978" lastFinishedPulling="2026-01-27 18:25:25.809813269 +0000 UTC m=+1180.939095881" observedRunningTime="2026-01-27 18:25:26.107522437 +0000 UTC m=+1181.236805049" watchObservedRunningTime="2026-01-27 18:25:26.113600011 +0000 UTC 
m=+1181.242882623" Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.136003 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97" podStartSLOduration=3.219314569 podStartE2EDuration="39.135980513s" podCreationTimestamp="2026-01-27 18:24:47 +0000 UTC" firstStartedPulling="2026-01-27 18:24:49.21692967 +0000 UTC m=+1144.346212282" lastFinishedPulling="2026-01-27 18:25:25.133595614 +0000 UTC m=+1180.262878226" observedRunningTime="2026-01-27 18:25:26.125275006 +0000 UTC m=+1181.254557638" watchObservedRunningTime="2026-01-27 18:25:26.135980513 +0000 UTC m=+1181.265263125" Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.170924 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" podStartSLOduration=3.1943348719999998 podStartE2EDuration="40.170894645s" podCreationTimestamp="2026-01-27 18:24:46 +0000 UTC" firstStartedPulling="2026-01-27 18:24:48.183280964 +0000 UTC m=+1143.312563566" lastFinishedPulling="2026-01-27 18:25:25.159840717 +0000 UTC m=+1180.289123339" observedRunningTime="2026-01-27 18:25:26.156329817 +0000 UTC m=+1181.285612459" watchObservedRunningTime="2026-01-27 18:25:26.170894645 +0000 UTC m=+1181.300177277" Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.696260 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt" Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.773419 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.778404 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" Jan 
27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.835937 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.888481 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" Jan 27 18:25:26 crc kubenswrapper[4907]: I0127 18:25:26.996687 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" Jan 27 18:25:27 crc kubenswrapper[4907]: I0127 18:25:27.188459 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" Jan 27 18:25:27 crc kubenswrapper[4907]: I0127 18:25:27.209016 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" Jan 27 18:25:29 crc kubenswrapper[4907]: I0127 18:25:29.465396 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" Jan 27 18:25:32 crc kubenswrapper[4907]: I0127 18:25:32.721753 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" Jan 27 18:25:36 crc kubenswrapper[4907]: I0127 18:25:36.800067 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" Jan 27 18:25:37 crc kubenswrapper[4907]: I0127 18:25:37.134163 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" Jan 27 18:25:37 crc kubenswrapper[4907]: I0127 18:25:37.244275 
4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" Jan 27 18:25:37 crc kubenswrapper[4907]: I0127 18:25:37.518600 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" Jan 27 18:25:37 crc kubenswrapper[4907]: I0127 18:25:37.530035 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.275615 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pgtv9"] Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.277448 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.282340 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.282423 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-5p94z" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.284693 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.293005 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pgtv9"] Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.293408 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.347699 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pb7f4"] Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.349151 4907 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.355133 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.356935 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pb7f4\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.356998 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-config\") pod \"dnsmasq-dns-78dd6ddcc-pb7f4\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.357027 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/145e21b1-c3a2-4057-a5e0-07e7d4196563-config\") pod \"dnsmasq-dns-675f4bcbfc-pgtv9\" (UID: \"145e21b1-c3a2-4057-a5e0-07e7d4196563\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.357048 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb5dz\" (UniqueName: \"kubernetes.io/projected/145e21b1-c3a2-4057-a5e0-07e7d4196563-kube-api-access-hb5dz\") pod \"dnsmasq-dns-675f4bcbfc-pgtv9\" (UID: \"145e21b1-c3a2-4057-a5e0-07e7d4196563\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.357091 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-jg5f8\" (UniqueName: \"kubernetes.io/projected/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-kube-api-access-jg5f8\") pod \"dnsmasq-dns-78dd6ddcc-pb7f4\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.363532 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pb7f4"] Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.459057 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-config\") pod \"dnsmasq-dns-78dd6ddcc-pb7f4\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.459117 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/145e21b1-c3a2-4057-a5e0-07e7d4196563-config\") pod \"dnsmasq-dns-675f4bcbfc-pgtv9\" (UID: \"145e21b1-c3a2-4057-a5e0-07e7d4196563\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.459146 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb5dz\" (UniqueName: \"kubernetes.io/projected/145e21b1-c3a2-4057-a5e0-07e7d4196563-kube-api-access-hb5dz\") pod \"dnsmasq-dns-675f4bcbfc-pgtv9\" (UID: \"145e21b1-c3a2-4057-a5e0-07e7d4196563\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.459192 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg5f8\" (UniqueName: \"kubernetes.io/projected/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-kube-api-access-jg5f8\") pod \"dnsmasq-dns-78dd6ddcc-pb7f4\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 
18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.459240 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pb7f4\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.460118 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pb7f4\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.460127 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-config\") pod \"dnsmasq-dns-78dd6ddcc-pb7f4\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.460277 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/145e21b1-c3a2-4057-a5e0-07e7d4196563-config\") pod \"dnsmasq-dns-675f4bcbfc-pgtv9\" (UID: \"145e21b1-c3a2-4057-a5e0-07e7d4196563\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.479238 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb5dz\" (UniqueName: \"kubernetes.io/projected/145e21b1-c3a2-4057-a5e0-07e7d4196563-kube-api-access-hb5dz\") pod \"dnsmasq-dns-675f4bcbfc-pgtv9\" (UID: \"145e21b1-c3a2-4057-a5e0-07e7d4196563\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.479703 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jg5f8\" (UniqueName: \"kubernetes.io/projected/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-kube-api-access-jg5f8\") pod \"dnsmasq-dns-78dd6ddcc-pb7f4\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.600215 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" Jan 27 18:25:55 crc kubenswrapper[4907]: I0127 18:25:55.667263 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:25:56 crc kubenswrapper[4907]: I0127 18:25:56.078537 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pgtv9"] Jan 27 18:25:56 crc kubenswrapper[4907]: I0127 18:25:56.162202 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pb7f4"] Jan 27 18:25:56 crc kubenswrapper[4907]: W0127 18:25:56.162350 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a60a3a1_171b_4ea9_b6cc_a20aa1e219c3.slice/crio-dab5f0e31f397d81534b723a836be60853b5e1f79747336bc85804357e4251db WatchSource:0}: Error finding container dab5f0e31f397d81534b723a836be60853b5e1f79747336bc85804357e4251db: Status 404 returned error can't find the container with id dab5f0e31f397d81534b723a836be60853b5e1f79747336bc85804357e4251db Jan 27 18:25:56 crc kubenswrapper[4907]: I0127 18:25:56.554906 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" event={"ID":"145e21b1-c3a2-4057-a5e0-07e7d4196563","Type":"ContainerStarted","Data":"652e65a79ff5cb6213aeb319351d17981ccbb6f22938bc88043e8eeb5ebe6be2"} Jan 27 18:25:56 crc kubenswrapper[4907]: I0127 18:25:56.556941 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" 
event={"ID":"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3","Type":"ContainerStarted","Data":"dab5f0e31f397d81534b723a836be60853b5e1f79747336bc85804357e4251db"} Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.103637 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pgtv9"] Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.147717 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zqddl"] Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.149532 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.196731 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zqddl"] Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.210848 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zqddl\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.210950 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-config\") pod \"dnsmasq-dns-666b6646f7-zqddl\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.211049 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpr4d\" (UniqueName: \"kubernetes.io/projected/e10199f9-f072-4566-ad76-a99c49596214-kube-api-access-jpr4d\") pod \"dnsmasq-dns-666b6646f7-zqddl\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " 
pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.316706 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-config\") pod \"dnsmasq-dns-666b6646f7-zqddl\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.316991 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpr4d\" (UniqueName: \"kubernetes.io/projected/e10199f9-f072-4566-ad76-a99c49596214-kube-api-access-jpr4d\") pod \"dnsmasq-dns-666b6646f7-zqddl\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.317046 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zqddl\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.319093 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-config\") pod \"dnsmasq-dns-666b6646f7-zqddl\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.319985 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zqddl\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.385072 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpr4d\" (UniqueName: \"kubernetes.io/projected/e10199f9-f072-4566-ad76-a99c49596214-kube-api-access-jpr4d\") pod \"dnsmasq-dns-666b6646f7-zqddl\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.449157 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pb7f4"] Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.481927 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jfqlq"] Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.483987 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.488140 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.499666 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jfqlq"] Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.522424 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52896\" (UniqueName: \"kubernetes.io/projected/bfcec505-2d02-4a43-ae48-0861df2f3f03-kube-api-access-52896\") pod \"dnsmasq-dns-57d769cc4f-jfqlq\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.522518 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-config\") pod \"dnsmasq-dns-57d769cc4f-jfqlq\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" 
Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.522618 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jfqlq\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.626825 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-config\") pod \"dnsmasq-dns-57d769cc4f-jfqlq\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.626937 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jfqlq\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.626967 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52896\" (UniqueName: \"kubernetes.io/projected/bfcec505-2d02-4a43-ae48-0861df2f3f03-kube-api-access-52896\") pod \"dnsmasq-dns-57d769cc4f-jfqlq\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.628221 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-config\") pod \"dnsmasq-dns-57d769cc4f-jfqlq\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.629098 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jfqlq\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.678226 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52896\" (UniqueName: \"kubernetes.io/projected/bfcec505-2d02-4a43-ae48-0861df2f3f03-kube-api-access-52896\") pod \"dnsmasq-dns-57d769cc4f-jfqlq\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:25:58 crc kubenswrapper[4907]: I0127 18:25:58.966811 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.212023 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zqddl"] Jan 27 18:25:59 crc kubenswrapper[4907]: W0127 18:25:59.222681 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode10199f9_f072_4566_ad76_a99c49596214.slice/crio-e658e65eaac3bf3792b88640344c5c2ebf8267bf6a569d2616e61c89cbae756d WatchSource:0}: Error finding container e658e65eaac3bf3792b88640344c5c2ebf8267bf6a569d2616e61c89cbae756d: Status 404 returned error can't find the container with id e658e65eaac3bf3792b88640344c5c2ebf8267bf6a569d2616e61c89cbae756d Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.266773 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.268498 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.275328 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.277343 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.286351 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.287822 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.288114 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.288214 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.288291 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.288116 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.288497 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.288906 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-q4549" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.289035 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.300101 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-server-1"] Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.312255 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.321799 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.443460 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.443843 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-server-conf\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.443881 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.443908 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444014 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444074 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-config-data\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444120 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444145 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-config-data\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444185 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444215 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444243 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8hgp\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-kube-api-access-b8hgp\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444272 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444357 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444399 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444465 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"pvc-2544df99-ce65-431e-b41d-029cd6318622\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444509 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-pod-info\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444529 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444566 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444582 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444701 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-server-conf\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444723 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444739 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-pod-info\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444758 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444775 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444794 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrrgc\" (UniqueName: 
\"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-kube-api-access-qrrgc\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444816 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444835 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52b6q\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-kube-api-access-52b6q\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.444921 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-config-data\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.445010 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.445066 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.445095 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.445114 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.445160 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546270 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546315 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546346 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546369 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546388 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-server-conf\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546409 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546436 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc 
kubenswrapper[4907]: I0127 18:25:59.546460 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546476 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-config-data\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546496 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546514 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-config-data\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546533 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546550 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546583 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8hgp\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-kube-api-access-b8hgp\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546629 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546654 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546671 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546694 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2544df99-ce65-431e-b41d-029cd6318622\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546712 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-pod-info\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546727 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546745 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546761 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546785 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-server-conf\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " 
pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546805 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546822 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-pod-info\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546841 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546858 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546875 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrrgc\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-kube-api-access-qrrgc\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546880 4907 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546893 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.546947 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52b6q\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-kube-api-access-52b6q\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.547007 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-config-data\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.547069 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.547115 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.547505 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.548422 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-config-data\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.548864 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.548936 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.549382 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-config-data\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " 
pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.551582 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.552432 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.552470 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.554353 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-server-conf\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.555255 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.555281 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0e84612870a5c0c4830950c12b2fd6510f31530f3fd62287fde6ecf77067364b/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.566006 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.567281 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.567337 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2544df99-ce65-431e-b41d-029cd6318622\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2ec55be154a66d09157b0ca2623a596d4c9f6b8adde5f16f336c822c2282072f/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.567684 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.568714 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.568794 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.568955 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-config-data\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " 
pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.569315 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.569698 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-server-conf\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.571364 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52b6q\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-kube-api-access-52b6q\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.572187 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-pod-info\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.572826 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.574358 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.574843 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.575104 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrrgc\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-kube-api-access-qrrgc\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.575756 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8hgp\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-kube-api-access-b8hgp\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.576190 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.576369 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-pod-info\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " 
pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.578231 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.578318 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/34bf333d34756f1b83dde2eb30c2397a83048a027d2708516d2de7b96e990e99/globalmount\"" pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.578726 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.582292 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.583955 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.590739 4907 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.620902 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jfqlq"] Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.624123 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\") pod \"rabbitmq-server-0\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.630200 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2544df99-ce65-431e-b41d-029cd6318622\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622\") pod \"rabbitmq-server-2\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.634818 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\") pod \"rabbitmq-server-1\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") " pod="openstack/rabbitmq-server-1" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.639682 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.640274 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" event={"ID":"e10199f9-f072-4566-ad76-a99c49596214","Type":"ContainerStarted","Data":"e658e65eaac3bf3792b88640344c5c2ebf8267bf6a569d2616e61c89cbae756d"} Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.683512 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.687208 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.693837 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.694137 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.694282 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.694396 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fl6zh" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.694347 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.695639 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.695820 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.713850 4907 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.751192 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.751235 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v88pj\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-kube-api-access-v88pj\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.751399 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.751463 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.751501 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.751574 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52cb02a9-7a60-4761-9770-a9b6910f1088-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.751639 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52cb02a9-7a60-4761-9770-a9b6910f1088-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.751666 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.751705 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.751758 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.751800 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.853291 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52cb02a9-7a60-4761-9770-a9b6910f1088-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.853347 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.853413 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.853454 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.853491 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.854172 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.854447 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.854476 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v88pj\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-kube-api-access-v88pj\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.854533 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.854589 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.854618 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.854663 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52cb02a9-7a60-4761-9770-a9b6910f1088-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.854781 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.856023 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.856052 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d92d749e8b6234664dd57319b2b5b7962d8bfa8dc2f0d92cbae41209d539d4c4/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.857381 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.859233 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.859576 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.860043 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52cb02a9-7a60-4761-9770-a9b6910f1088-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.866512 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.867666 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.876816 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v88pj\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-kube-api-access-v88pj\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.877806 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52cb02a9-7a60-4761-9770-a9b6910f1088-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.903287 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:25:59 crc 
kubenswrapper[4907]: I0127 18:25:59.909510 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 18:25:59 crc kubenswrapper[4907]: I0127 18:25:59.925339 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.018657 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.843238 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.846363 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.851247 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-mz4lj" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.852481 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.853138 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.853265 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.853565 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.859654 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.982611 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e57d2b03-9116-4a79-bfc2-5b802cf62910-kolla-config\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.982670 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-15390c34-6a31-46a8-b94e-92cfd625dc3f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15390c34-6a31-46a8-b94e-92cfd625dc3f\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.982735 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf76f\" (UniqueName: \"kubernetes.io/projected/e57d2b03-9116-4a79-bfc2-5b802cf62910-kube-api-access-vf76f\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.982772 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e57d2b03-9116-4a79-bfc2-5b802cf62910-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.982798 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57d2b03-9116-4a79-bfc2-5b802cf62910-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.983179 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e57d2b03-9116-4a79-bfc2-5b802cf62910-config-data-default\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.983266 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e57d2b03-9116-4a79-bfc2-5b802cf62910-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:00 crc kubenswrapper[4907]: I0127 18:26:00.983325 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e57d2b03-9116-4a79-bfc2-5b802cf62910-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.085073 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57d2b03-9116-4a79-bfc2-5b802cf62910-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.085146 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e57d2b03-9116-4a79-bfc2-5b802cf62910-config-data-default\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.085168 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e57d2b03-9116-4a79-bfc2-5b802cf62910-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.085194 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e57d2b03-9116-4a79-bfc2-5b802cf62910-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.085240 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e57d2b03-9116-4a79-bfc2-5b802cf62910-kolla-config\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.085268 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-15390c34-6a31-46a8-b94e-92cfd625dc3f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15390c34-6a31-46a8-b94e-92cfd625dc3f\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.085324 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf76f\" (UniqueName: \"kubernetes.io/projected/e57d2b03-9116-4a79-bfc2-5b802cf62910-kube-api-access-vf76f\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.085357 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e57d2b03-9116-4a79-bfc2-5b802cf62910-config-data-generated\") pod 
\"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.085730 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e57d2b03-9116-4a79-bfc2-5b802cf62910-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.086139 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e57d2b03-9116-4a79-bfc2-5b802cf62910-kolla-config\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.086428 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e57d2b03-9116-4a79-bfc2-5b802cf62910-config-data-default\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.087540 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e57d2b03-9116-4a79-bfc2-5b802cf62910-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.091153 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57d2b03-9116-4a79-bfc2-5b802cf62910-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 
18:26:01.091208 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e57d2b03-9116-4a79-bfc2-5b802cf62910-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.091663 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.091684 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-15390c34-6a31-46a8-b94e-92cfd625dc3f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15390c34-6a31-46a8-b94e-92cfd625dc3f\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/dbe06434949e9ae5912d882f776373a15677e014e71bfee6b8a0dccace93f9b2/globalmount\"" pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.102662 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf76f\" (UniqueName: \"kubernetes.io/projected/e57d2b03-9116-4a79-bfc2-5b802cf62910-kube-api-access-vf76f\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.126793 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-15390c34-6a31-46a8-b94e-92cfd625dc3f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15390c34-6a31-46a8-b94e-92cfd625dc3f\") pod \"openstack-galera-0\" (UID: \"e57d2b03-9116-4a79-bfc2-5b802cf62910\") " pod="openstack/openstack-galera-0" Jan 27 18:26:01 crc kubenswrapper[4907]: I0127 18:26:01.176980 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.303535 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.306218 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.328483 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-tbn4b" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.328679 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.328824 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.328963 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.341592 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.409124 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-209e03dc-f375-4ed9-a4fa-ff1524246baf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-209e03dc-f375-4ed9-a4fa-ff1524246baf\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.409168 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0b24ac54-7ca4-4b1a-b26c-41ce82025599-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.409220 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b24ac54-7ca4-4b1a-b26c-41ce82025599-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.409399 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0b24ac54-7ca4-4b1a-b26c-41ce82025599-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.409715 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b24ac54-7ca4-4b1a-b26c-41ce82025599-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.409788 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgnzv\" (UniqueName: \"kubernetes.io/projected/0b24ac54-7ca4-4b1a-b26c-41ce82025599-kube-api-access-qgnzv\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.409912 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0b24ac54-7ca4-4b1a-b26c-41ce82025599-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.409961 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0b24ac54-7ca4-4b1a-b26c-41ce82025599-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.511159 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0b24ac54-7ca4-4b1a-b26c-41ce82025599-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.511255 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b24ac54-7ca4-4b1a-b26c-41ce82025599-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.511312 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgnzv\" (UniqueName: \"kubernetes.io/projected/0b24ac54-7ca4-4b1a-b26c-41ce82025599-kube-api-access-qgnzv\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.511347 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0b24ac54-7ca4-4b1a-b26c-41ce82025599-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.511367 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0b24ac54-7ca4-4b1a-b26c-41ce82025599-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.511390 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-209e03dc-f375-4ed9-a4fa-ff1524246baf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-209e03dc-f375-4ed9-a4fa-ff1524246baf\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.511415 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0b24ac54-7ca4-4b1a-b26c-41ce82025599-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.511461 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b24ac54-7ca4-4b1a-b26c-41ce82025599-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.513051 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/0b24ac54-7ca4-4b1a-b26c-41ce82025599-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.513455 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0b24ac54-7ca4-4b1a-b26c-41ce82025599-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.515311 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0b24ac54-7ca4-4b1a-b26c-41ce82025599-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.515628 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b24ac54-7ca4-4b1a-b26c-41ce82025599-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.516161 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.516190 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-209e03dc-f375-4ed9-a4fa-ff1524246baf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-209e03dc-f375-4ed9-a4fa-ff1524246baf\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/69c22e915c983652805b9208a0c4a6ac775f245fa6a304f939ec9cd0ce7f310a/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.523236 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b24ac54-7ca4-4b1a-b26c-41ce82025599-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.524119 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b24ac54-7ca4-4b1a-b26c-41ce82025599-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.534611 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgnzv\" (UniqueName: \"kubernetes.io/projected/0b24ac54-7ca4-4b1a-b26c-41ce82025599-kube-api-access-qgnzv\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.602074 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-209e03dc-f375-4ed9-a4fa-ff1524246baf\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-209e03dc-f375-4ed9-a4fa-ff1524246baf\") pod \"openstack-cell1-galera-0\" (UID: \"0b24ac54-7ca4-4b1a-b26c-41ce82025599\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.653927 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.763636 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.764943 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.780468 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.781046 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.781253 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.781400 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-ztsdz" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.819636 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/407bf5df-e69a-49ae-ac93-858be78d98a0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.820297 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/407bf5df-e69a-49ae-ac93-858be78d98a0-config-data\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.820410 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407bf5df-e69a-49ae-ac93-858be78d98a0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.820728 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zln9g\" (UniqueName: \"kubernetes.io/projected/407bf5df-e69a-49ae-ac93-858be78d98a0-kube-api-access-zln9g\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.820778 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/407bf5df-e69a-49ae-ac93-858be78d98a0-kolla-config\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.931617 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zln9g\" (UniqueName: \"kubernetes.io/projected/407bf5df-e69a-49ae-ac93-858be78d98a0-kube-api-access-zln9g\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.931698 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/407bf5df-e69a-49ae-ac93-858be78d98a0-kolla-config\") pod \"memcached-0\" (UID: 
\"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.931785 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/407bf5df-e69a-49ae-ac93-858be78d98a0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.931825 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/407bf5df-e69a-49ae-ac93-858be78d98a0-config-data\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.931869 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407bf5df-e69a-49ae-ac93-858be78d98a0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.935391 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407bf5df-e69a-49ae-ac93-858be78d98a0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.936460 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/407bf5df-e69a-49ae-ac93-858be78d98a0-kolla-config\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.937692 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/407bf5df-e69a-49ae-ac93-858be78d98a0-config-data\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.939080 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/407bf5df-e69a-49ae-ac93-858be78d98a0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:02 crc kubenswrapper[4907]: I0127 18:26:02.965922 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zln9g\" (UniqueName: \"kubernetes.io/projected/407bf5df-e69a-49ae-ac93-858be78d98a0-kube-api-access-zln9g\") pod \"memcached-0\" (UID: \"407bf5df-e69a-49ae-ac93-858be78d98a0\") " pod="openstack/memcached-0" Jan 27 18:26:03 crc kubenswrapper[4907]: I0127 18:26:03.086665 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 27 18:26:03 crc kubenswrapper[4907]: I0127 18:26:03.684976 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" event={"ID":"bfcec505-2d02-4a43-ae48-0861df2f3f03","Type":"ContainerStarted","Data":"6f8372a96157a4b3bb9b594a1bb14b4dea21ae1a28e8793346ac6d1505a183aa"} Jan 27 18:26:04 crc kubenswrapper[4907]: I0127 18:26:04.464334 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 18:26:04 crc kubenswrapper[4907]: I0127 18:26:04.465590 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 18:26:04 crc kubenswrapper[4907]: I0127 18:26:04.468014 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-dg7j2" Jan 27 18:26:04 crc kubenswrapper[4907]: I0127 18:26:04.478084 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 18:26:04 crc kubenswrapper[4907]: I0127 18:26:04.569758 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vczdr\" (UniqueName: \"kubernetes.io/projected/f00f131e-56a8-4fae-a498-798713d2159f-kube-api-access-vczdr\") pod \"kube-state-metrics-0\" (UID: \"f00f131e-56a8-4fae-a498-798713d2159f\") " pod="openstack/kube-state-metrics-0" Jan 27 18:26:04 crc kubenswrapper[4907]: I0127 18:26:04.673333 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vczdr\" (UniqueName: \"kubernetes.io/projected/f00f131e-56a8-4fae-a498-798713d2159f-kube-api-access-vczdr\") pod \"kube-state-metrics-0\" (UID: \"f00f131e-56a8-4fae-a498-798713d2159f\") " pod="openstack/kube-state-metrics-0" Jan 27 18:26:04 crc kubenswrapper[4907]: I0127 18:26:04.723530 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vczdr\" (UniqueName: \"kubernetes.io/projected/f00f131e-56a8-4fae-a498-798713d2159f-kube-api-access-vczdr\") pod \"kube-state-metrics-0\" (UID: \"f00f131e-56a8-4fae-a498-798713d2159f\") " pod="openstack/kube-state-metrics-0" Jan 27 18:26:04 crc kubenswrapper[4907]: I0127 18:26:04.795713 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.183950 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-s824m"] Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.187143 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.192938 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.194372 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-xwgj8" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.197943 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-s824m"] Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.294192 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-695jq\" (UniqueName: \"kubernetes.io/projected/6ccb4875-977f-4fea-b3fa-8a4e4ba5a874-kube-api-access-695jq\") pod \"observability-ui-dashboards-66cbf594b5-s824m\" (UID: \"6ccb4875-977f-4fea-b3fa-8a4e4ba5a874\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.294384 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ccb4875-977f-4fea-b3fa-8a4e4ba5a874-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-s824m\" (UID: \"6ccb4875-977f-4fea-b3fa-8a4e4ba5a874\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 
18:26:05.395891 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-695jq\" (UniqueName: \"kubernetes.io/projected/6ccb4875-977f-4fea-b3fa-8a4e4ba5a874-kube-api-access-695jq\") pod \"observability-ui-dashboards-66cbf594b5-s824m\" (UID: \"6ccb4875-977f-4fea-b3fa-8a4e4ba5a874\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.396011 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ccb4875-977f-4fea-b3fa-8a4e4ba5a874-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-s824m\" (UID: \"6ccb4875-977f-4fea-b3fa-8a4e4ba5a874\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m" Jan 27 18:26:05 crc kubenswrapper[4907]: E0127 18:26:05.396236 4907 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Jan 27 18:26:05 crc kubenswrapper[4907]: E0127 18:26:05.396321 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ccb4875-977f-4fea-b3fa-8a4e4ba5a874-serving-cert podName:6ccb4875-977f-4fea-b3fa-8a4e4ba5a874 nodeName:}" failed. No retries permitted until 2026-01-27 18:26:05.89629876 +0000 UTC m=+1221.025581372 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6ccb4875-977f-4fea-b3fa-8a4e4ba5a874-serving-cert") pod "observability-ui-dashboards-66cbf594b5-s824m" (UID: "6ccb4875-977f-4fea-b3fa-8a4e4ba5a874") : secret "observability-ui-dashboards" not found Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.415757 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-695jq\" (UniqueName: \"kubernetes.io/projected/6ccb4875-977f-4fea-b3fa-8a4e4ba5a874-kube-api-access-695jq\") pod \"observability-ui-dashboards-66cbf594b5-s824m\" (UID: \"6ccb4875-977f-4fea-b3fa-8a4e4ba5a874\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.468647 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7b674f54c6-zhrj9"] Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.470452 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.491450 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b674f54c6-zhrj9"] Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.525158 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a2362241-225f-40e2-9be3-67766a65316b-console-oauth-config\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.525258 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-oauth-serving-cert\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.525295 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgkrw\" (UniqueName: \"kubernetes.io/projected/a2362241-225f-40e2-9be3-67766a65316b-kube-api-access-hgkrw\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.525333 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-console-config\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.525375 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-trusted-ca-bundle\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.525404 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2362241-225f-40e2-9be3-67766a65316b-console-serving-cert\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.525432 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-service-ca\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.627425 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2362241-225f-40e2-9be3-67766a65316b-console-serving-cert\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.627498 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-service-ca\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.627592 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a2362241-225f-40e2-9be3-67766a65316b-console-oauth-config\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.627666 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-oauth-serving-cert\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.627742 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgkrw\" (UniqueName: \"kubernetes.io/projected/a2362241-225f-40e2-9be3-67766a65316b-kube-api-access-hgkrw\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.627808 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-console-config\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.627896 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-trusted-ca-bundle\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.628919 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-console-config\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.628947 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-service-ca\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.629204 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-trusted-ca-bundle\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.629255 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a2362241-225f-40e2-9be3-67766a65316b-oauth-serving-cert\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.646397 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a2362241-225f-40e2-9be3-67766a65316b-console-oauth-config\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.650590 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2362241-225f-40e2-9be3-67766a65316b-console-serving-cert\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.655987 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgkrw\" (UniqueName: \"kubernetes.io/projected/a2362241-225f-40e2-9be3-67766a65316b-kube-api-access-hgkrw\") pod \"console-7b674f54c6-zhrj9\" (UID: \"a2362241-225f-40e2-9be3-67766a65316b\") " pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.751970 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.769287 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.792259 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.792467 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.793060 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.793260 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.793386 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.794730 4907 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-v8l29" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.794863 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.800307 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.807199 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.829241 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.844235 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.844302 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f21c64d8-b95d-460b-a32f-1498c725d8e8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.844438 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " 
pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.844520 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.844676 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.844785 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.844838 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.844878 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-config\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.845050 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.845198 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b9vm\" (UniqueName: \"kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-kube-api-access-6b9vm\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.948910 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.948964 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc 
kubenswrapper[4907]: I0127 18:26:05.948989 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-config\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.949025 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.949072 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b9vm\" (UniqueName: \"kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-kube-api-access-6b9vm\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.949097 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ccb4875-977f-4fea-b3fa-8a4e4ba5a874-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-s824m\" (UID: \"6ccb4875-977f-4fea-b3fa-8a4e4ba5a874\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.949130 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 
crc kubenswrapper[4907]: I0127 18:26:05.949162 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f21c64d8-b95d-460b-a32f-1498c725d8e8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.949200 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.949222 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.949244 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.950261 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " 
pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.950487 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.951119 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.954199 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-config\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.954347 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.955755 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.955786 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7c10aa9d009ec60f264ba4aa31b8554e40bc9aa6367f517a78b05ac7bb1849b2/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.957007 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f21c64d8-b95d-460b-a32f-1498c725d8e8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.961854 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ccb4875-977f-4fea-b3fa-8a4e4ba5a874-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-s824m\" (UID: \"6ccb4875-977f-4fea-b3fa-8a4e4ba5a874\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.969226 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.969722 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b9vm\" (UniqueName: 
\"kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-kube-api-access-6b9vm\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.979332 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:05 crc kubenswrapper[4907]: I0127 18:26:05.994374 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:06 crc kubenswrapper[4907]: I0127 18:26:06.107434 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m" Jan 27 18:26:06 crc kubenswrapper[4907]: I0127 18:26:06.113983 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:06 crc kubenswrapper[4907]: I0127 18:26:06.406377 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.400328 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-96prz"] Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.401671 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.405545 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-brfrw" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.406164 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.408338 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.411833 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-2q6jk"] Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.417069 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.441400 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-2q6jk"] Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.477203 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96prz"] Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.497601 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/daaea3c0-a88d-442f-be06-bb95b2825fcc-scripts\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.497829 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/daaea3c0-a88d-442f-be06-bb95b2825fcc-var-log-ovn\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " 
pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.497976 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/daaea3c0-a88d-442f-be06-bb95b2825fcc-ovn-controller-tls-certs\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.498062 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-etc-ovs\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.498120 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tbrn\" (UniqueName: \"kubernetes.io/projected/daaea3c0-a88d-442f-be06-bb95b2825fcc-kube-api-access-9tbrn\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.498154 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-var-log\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.498316 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-var-run\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " 
pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.498364 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daaea3c0-a88d-442f-be06-bb95b2825fcc-combined-ca-bundle\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.498424 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-var-lib\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.498506 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-scripts\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.498691 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/daaea3c0-a88d-442f-be06-bb95b2825fcc-var-run-ovn\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.498763 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/daaea3c0-a88d-442f-be06-bb95b2825fcc-var-run\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc 
kubenswrapper[4907]: I0127 18:26:08.498815 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5lhm\" (UniqueName: \"kubernetes.io/projected/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-kube-api-access-n5lhm\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.599967 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-var-run\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.600276 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daaea3c0-a88d-442f-be06-bb95b2825fcc-combined-ca-bundle\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.601138 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-var-lib\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.600567 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-var-run\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.601326 4907 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-scripts\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.601694 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/daaea3c0-a88d-442f-be06-bb95b2825fcc-var-run-ovn\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.602094 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/daaea3c0-a88d-442f-be06-bb95b2825fcc-var-run\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.602326 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5lhm\" (UniqueName: \"kubernetes.io/projected/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-kube-api-access-n5lhm\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.601380 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-var-lib\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.602405 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/daaea3c0-a88d-442f-be06-bb95b2825fcc-var-run-ovn\") pod \"ovn-controller-96prz\" (UID: 
\"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.602547 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/daaea3c0-a88d-442f-be06-bb95b2825fcc-scripts\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.602772 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/daaea3c0-a88d-442f-be06-bb95b2825fcc-var-log-ovn\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.602861 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/daaea3c0-a88d-442f-be06-bb95b2825fcc-ovn-controller-tls-certs\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.602972 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-etc-ovs\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.603071 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tbrn\" (UniqueName: \"kubernetes.io/projected/daaea3c0-a88d-442f-be06-bb95b2825fcc-kube-api-access-9tbrn\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 
18:26:08.603154 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-var-log\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.603465 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-var-log\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.603744 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/daaea3c0-a88d-442f-be06-bb95b2825fcc-var-run\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.603844 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/daaea3c0-a88d-442f-be06-bb95b2825fcc-var-log-ovn\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.603954 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-etc-ovs\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.604528 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-scripts\") pod 
\"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.604704 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/daaea3c0-a88d-442f-be06-bb95b2825fcc-scripts\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.608203 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/daaea3c0-a88d-442f-be06-bb95b2825fcc-ovn-controller-tls-certs\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.608413 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daaea3c0-a88d-442f-be06-bb95b2825fcc-combined-ca-bundle\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.618768 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tbrn\" (UniqueName: \"kubernetes.io/projected/daaea3c0-a88d-442f-be06-bb95b2825fcc-kube-api-access-9tbrn\") pod \"ovn-controller-96prz\" (UID: \"daaea3c0-a88d-442f-be06-bb95b2825fcc\") " pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.619818 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5lhm\" (UniqueName: \"kubernetes.io/projected/89e5e512-03ab-41c7-8cde-1e20d1f72d0d-kube-api-access-n5lhm\") pod \"ovn-controller-ovs-2q6jk\" (UID: \"89e5e512-03ab-41c7-8cde-1e20d1f72d0d\") " pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 
18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.719205 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96prz" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.723043 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.725028 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.728155 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.728447 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.728727 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.728788 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-8dj9z" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.728965 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.758823 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.758986 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.808489 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-85ad384a-b246-4dfd-8ee8-7bd93e8dd130\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85ad384a-b246-4dfd-8ee8-7bd93e8dd130\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.808570 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x765h\" (UniqueName: \"kubernetes.io/projected/32811f4d-c205-437d-a06c-ac4fff30cead-kube-api-access-x765h\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.808625 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32811f4d-c205-437d-a06c-ac4fff30cead-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.808680 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32811f4d-c205-437d-a06c-ac4fff30cead-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.808698 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/32811f4d-c205-437d-a06c-ac4fff30cead-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.809126 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/32811f4d-c205-437d-a06c-ac4fff30cead-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.809214 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/32811f4d-c205-437d-a06c-ac4fff30cead-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.809369 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32811f4d-c205-437d-a06c-ac4fff30cead-config\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.911228 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32811f4d-c205-437d-a06c-ac4fff30cead-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.911547 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/32811f4d-c205-437d-a06c-ac4fff30cead-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: 
I0127 18:26:08.911632 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/32811f4d-c205-437d-a06c-ac4fff30cead-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.911648 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/32811f4d-c205-437d-a06c-ac4fff30cead-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.911672 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32811f4d-c205-437d-a06c-ac4fff30cead-config\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.911720 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-85ad384a-b246-4dfd-8ee8-7bd93e8dd130\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85ad384a-b246-4dfd-8ee8-7bd93e8dd130\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.911753 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x765h\" (UniqueName: \"kubernetes.io/projected/32811f4d-c205-437d-a06c-ac4fff30cead-kube-api-access-x765h\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.911794 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32811f4d-c205-437d-a06c-ac4fff30cead-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.913018 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/32811f4d-c205-437d-a06c-ac4fff30cead-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.913264 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32811f4d-c205-437d-a06c-ac4fff30cead-config\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.917165 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.917220 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-85ad384a-b246-4dfd-8ee8-7bd93e8dd130\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85ad384a-b246-4dfd-8ee8-7bd93e8dd130\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ac3443108e8bcdcc20f6b358fb921b68da41b27927115ae8c43a5ccab21823c7/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.917984 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32811f4d-c205-437d-a06c-ac4fff30cead-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.918135 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32811f4d-c205-437d-a06c-ac4fff30cead-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.921183 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/32811f4d-c205-437d-a06c-ac4fff30cead-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.928795 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x765h\" (UniqueName: \"kubernetes.io/projected/32811f4d-c205-437d-a06c-ac4fff30cead-kube-api-access-x765h\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " 
pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.936488 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/32811f4d-c205-437d-a06c-ac4fff30cead-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:08 crc kubenswrapper[4907]: I0127 18:26:08.959036 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-85ad384a-b246-4dfd-8ee8-7bd93e8dd130\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85ad384a-b246-4dfd-8ee8-7bd93e8dd130\") pod \"ovsdbserver-nb-0\" (UID: \"32811f4d-c205-437d-a06c-ac4fff30cead\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:09 crc kubenswrapper[4907]: I0127 18:26:09.051358 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.115310 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.125830 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.134001 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.134909 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.135118 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.135536 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-qbnbr" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.153120 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.182792 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7185e8ed-9479-43cc-814b-cfcd26e548a5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.182893 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7185e8ed-9479-43cc-814b-cfcd26e548a5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.182933 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7185e8ed-9479-43cc-814b-cfcd26e548a5-combined-ca-bundle\") pod 
\"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.182957 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7185e8ed-9479-43cc-814b-cfcd26e548a5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.182988 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ecc8b970-c82e-4ce6-ab25-a778ccde6659\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ecc8b970-c82e-4ce6-ab25-a778ccde6659\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.183008 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7185e8ed-9479-43cc-814b-cfcd26e548a5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.183034 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gzx5\" (UniqueName: \"kubernetes.io/projected/7185e8ed-9479-43cc-814b-cfcd26e548a5-kube-api-access-9gzx5\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.183071 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7185e8ed-9479-43cc-814b-cfcd26e548a5-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.284715 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7185e8ed-9479-43cc-814b-cfcd26e548a5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.284790 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7185e8ed-9479-43cc-814b-cfcd26e548a5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.284825 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7185e8ed-9479-43cc-814b-cfcd26e548a5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.284860 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ecc8b970-c82e-4ce6-ab25-a778ccde6659\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ecc8b970-c82e-4ce6-ab25-a778ccde6659\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.284883 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7185e8ed-9479-43cc-814b-cfcd26e548a5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 
18:26:12.284910 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gzx5\" (UniqueName: \"kubernetes.io/projected/7185e8ed-9479-43cc-814b-cfcd26e548a5-kube-api-access-9gzx5\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.284944 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7185e8ed-9479-43cc-814b-cfcd26e548a5-config\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.284975 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7185e8ed-9479-43cc-814b-cfcd26e548a5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.286258 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7185e8ed-9479-43cc-814b-cfcd26e548a5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.286792 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7185e8ed-9479-43cc-814b-cfcd26e548a5-config\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.286932 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7185e8ed-9479-43cc-814b-cfcd26e548a5-scripts\") pod 
\"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.289687 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.289716 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ecc8b970-c82e-4ce6-ab25-a778ccde6659\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ecc8b970-c82e-4ce6-ab25-a778ccde6659\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bfc67f3587034b5cebb2487e0145ff94b00c6aece8013bfb7f24f752b6cedc8b/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.290588 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7185e8ed-9479-43cc-814b-cfcd26e548a5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.292961 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7185e8ed-9479-43cc-814b-cfcd26e548a5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.293431 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7185e8ed-9479-43cc-814b-cfcd26e548a5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 
18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.302630 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gzx5\" (UniqueName: \"kubernetes.io/projected/7185e8ed-9479-43cc-814b-cfcd26e548a5-kube-api-access-9gzx5\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.321251 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ecc8b970-c82e-4ce6-ab25-a778ccde6659\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ecc8b970-c82e-4ce6-ab25-a778ccde6659\") pod \"ovsdbserver-sb-0\" (UID: \"7185e8ed-9479-43cc-814b-cfcd26e548a5\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: I0127 18:26:12.456901 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:12 crc kubenswrapper[4907]: E0127 18:26:12.956703 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 18:26:12 crc kubenswrapper[4907]: E0127 18:26:12.957369 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hb5dz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-pgtv9_openstack(145e21b1-c3a2-4057-a5e0-07e7d4196563): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:26:12 crc kubenswrapper[4907]: E0127 18:26:12.958646 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" podUID="145e21b1-c3a2-4057-a5e0-07e7d4196563" Jan 27 18:26:13 crc kubenswrapper[4907]: E0127 18:26:13.054328 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 18:26:13 crc kubenswrapper[4907]: E0127 18:26:13.054504 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jg5f8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePul
lPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-pb7f4_openstack(8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:26:13 crc kubenswrapper[4907]: E0127 18:26:13.055798 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" podUID="8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3" Jan 27 18:26:13 crc kubenswrapper[4907]: I0127 18:26:13.788786 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:26:13 crc kubenswrapper[4907]: I0127 18:26:13.799202 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 18:26:13 crc kubenswrapper[4907]: I0127 18:26:13.824449 4907 generic.go:334] "Generic (PLEG): container finished" podID="bfcec505-2d02-4a43-ae48-0861df2f3f03" containerID="912990bf5531c8d8dcf347a7806f2a0e43907aeff0a89bc9002b931b3fde59bf" exitCode=0 Jan 27 18:26:13 crc kubenswrapper[4907]: I0127 18:26:13.824637 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" 
event={"ID":"bfcec505-2d02-4a43-ae48-0861df2f3f03","Type":"ContainerDied","Data":"912990bf5531c8d8dcf347a7806f2a0e43907aeff0a89bc9002b931b3fde59bf"} Jan 27 18:26:13 crc kubenswrapper[4907]: I0127 18:26:13.825932 4907 generic.go:334] "Generic (PLEG): container finished" podID="e10199f9-f072-4566-ad76-a99c49596214" containerID="7daaccfc37ac15f3139901ecde0bd9f2893f0a588b9db9d90f5cf0512ba3ae34" exitCode=0 Jan 27 18:26:13 crc kubenswrapper[4907]: I0127 18:26:13.826038 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" event={"ID":"e10199f9-f072-4566-ad76-a99c49596214","Type":"ContainerDied","Data":"7daaccfc37ac15f3139901ecde0bd9f2893f0a588b9db9d90f5cf0512ba3ae34"} Jan 27 18:26:13 crc kubenswrapper[4907]: W0127 18:26:13.834312 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45d050d2_eeb4_4603_a6c4_1cbdd454ea35.slice/crio-0484819ef66526692fd2b3dc5a8591e97aabacdddd5e6ecdeab067ea068207ea WatchSource:0}: Error finding container 0484819ef66526692fd2b3dc5a8591e97aabacdddd5e6ecdeab067ea068207ea: Status 404 returned error can't find the container with id 0484819ef66526692fd2b3dc5a8591e97aabacdddd5e6ecdeab067ea068207ea Jan 27 18:26:13 crc kubenswrapper[4907]: W0127 18:26:13.840583 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52cb02a9_7a60_4761_9770_a9b6910f1088.slice/crio-ef5b60bb5a09fc8310da2150429f54ee5d10a08c6bea32b06c65111f27a03d40 WatchSource:0}: Error finding container ef5b60bb5a09fc8310da2150429f54ee5d10a08c6bea32b06c65111f27a03d40: Status 404 returned error can't find the container with id ef5b60bb5a09fc8310da2150429f54ee5d10a08c6bea32b06c65111f27a03d40 Jan 27 18:26:13 crc kubenswrapper[4907]: I0127 18:26:13.975994 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 18:26:13 crc kubenswrapper[4907]: I0127 
18:26:13.993250 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:26:14 crc kubenswrapper[4907]: E0127 18:26:14.098030 4907 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 27 18:26:14 crc kubenswrapper[4907]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/e10199f9-f072-4566-ad76-a99c49596214/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 18:26:14 crc kubenswrapper[4907]: > podSandboxID="e658e65eaac3bf3792b88640344c5c2ebf8267bf6a569d2616e61c89cbae756d" Jan 27 18:26:14 crc kubenswrapper[4907]: E0127 18:26:14.098182 4907 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 18:26:14 crc kubenswrapper[4907]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jpr4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-zqddl_openstack(e10199f9-f072-4566-ad76-a99c49596214): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/e10199f9-f072-4566-ad76-a99c49596214/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 18:26:14 crc kubenswrapper[4907]: > logger="UnhandledError" Jan 27 18:26:14 crc kubenswrapper[4907]: E0127 18:26:14.099374 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/e10199f9-f072-4566-ad76-a99c49596214/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" podUID="e10199f9-f072-4566-ad76-a99c49596214" Jan 27 18:26:14 crc kubenswrapper[4907]: I0127 18:26:14.840201 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52cb02a9-7a60-4761-9770-a9b6910f1088","Type":"ContainerStarted","Data":"ef5b60bb5a09fc8310da2150429f54ee5d10a08c6bea32b06c65111f27a03d40"} Jan 27 18:26:14 crc 
kubenswrapper[4907]: I0127 18:26:14.843597 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e","Type":"ContainerStarted","Data":"3251ec13ecf2d816f4249d2d95826865dd55f4c6e4f346e728a8820870c8122f"} Jan 27 18:26:14 crc kubenswrapper[4907]: I0127 18:26:14.845075 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"45d050d2-eeb4-4603-a6c4-1cbdd454ea35","Type":"ContainerStarted","Data":"0484819ef66526692fd2b3dc5a8591e97aabacdddd5e6ecdeab067ea068207ea"} Jan 27 18:26:14 crc kubenswrapper[4907]: I0127 18:26:14.847890 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" event={"ID":"bfcec505-2d02-4a43-ae48-0861df2f3f03","Type":"ContainerStarted","Data":"d50ca368775142fa8baed9f94fc1a073a58fb7981b3dffe257a3949f6a0afda5"} Jan 27 18:26:14 crc kubenswrapper[4907]: I0127 18:26:14.849062 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:26:14 crc kubenswrapper[4907]: I0127 18:26:14.851047 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce","Type":"ContainerStarted","Data":"f438ce9452f05a2c33576c461be5d8342246dc4a389096e1ff8d110a343a2c82"} Jan 27 18:26:14 crc kubenswrapper[4907]: I0127 18:26:14.879125 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" podStartSLOduration=7.191630988 podStartE2EDuration="16.87910622s" podCreationTimestamp="2026-01-27 18:25:58 +0000 UTC" firstStartedPulling="2026-01-27 18:26:03.457622845 +0000 UTC m=+1218.586905457" lastFinishedPulling="2026-01-27 18:26:13.145098077 +0000 UTC m=+1228.274380689" observedRunningTime="2026-01-27 18:26:14.870364999 +0000 UTC m=+1229.999647631" watchObservedRunningTime="2026-01-27 18:26:14.87910622 +0000 UTC 
m=+1230.008388832" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.084143 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.121856 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.130129 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.135151 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-s824m"] Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.146059 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.149689 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.174621 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-dns-svc\") pod \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.174716 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb5dz\" (UniqueName: \"kubernetes.io/projected/145e21b1-c3a2-4057-a5e0-07e7d4196563-kube-api-access-hb5dz\") pod \"145e21b1-c3a2-4057-a5e0-07e7d4196563\" (UID: \"145e21b1-c3a2-4057-a5e0-07e7d4196563\") " Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.174869 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg5f8\" (UniqueName: 
\"kubernetes.io/projected/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-kube-api-access-jg5f8\") pod \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.174932 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/145e21b1-c3a2-4057-a5e0-07e7d4196563-config\") pod \"145e21b1-c3a2-4057-a5e0-07e7d4196563\" (UID: \"145e21b1-c3a2-4057-a5e0-07e7d4196563\") " Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.174986 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-config\") pod \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\" (UID: \"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3\") " Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.175988 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-config" (OuterVolumeSpecName: "config") pod "8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3" (UID: "8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.176368 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3" (UID: "8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.177369 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/145e21b1-c3a2-4057-a5e0-07e7d4196563-config" (OuterVolumeSpecName: "config") pod "145e21b1-c3a2-4057-a5e0-07e7d4196563" (UID: "145e21b1-c3a2-4057-a5e0-07e7d4196563"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.182840 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/145e21b1-c3a2-4057-a5e0-07e7d4196563-kube-api-access-hb5dz" (OuterVolumeSpecName: "kube-api-access-hb5dz") pod "145e21b1-c3a2-4057-a5e0-07e7d4196563" (UID: "145e21b1-c3a2-4057-a5e0-07e7d4196563"). InnerVolumeSpecName "kube-api-access-hb5dz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.183162 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-kube-api-access-jg5f8" (OuterVolumeSpecName: "kube-api-access-jg5f8") pod "8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3" (UID: "8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3"). InnerVolumeSpecName "kube-api-access-jg5f8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.278595 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.279075 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb5dz\" (UniqueName: \"kubernetes.io/projected/145e21b1-c3a2-4057-a5e0-07e7d4196563-kube-api-access-hb5dz\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.279094 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg5f8\" (UniqueName: \"kubernetes.io/projected/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-kube-api-access-jg5f8\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.279110 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/145e21b1-c3a2-4057-a5e0-07e7d4196563-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.279123 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.337062 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96prz"] Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.366863 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b674f54c6-zhrj9"] Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.376844 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.384870 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.578810 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 18:26:15 crc kubenswrapper[4907]: W0127 18:26:15.599473 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7185e8ed_9479_43cc_814b_cfcd26e548a5.slice/crio-999fe6f31782cc128e55073d26a7acd4f2a987130e12d5c0d5150010580e361e WatchSource:0}: Error finding container 999fe6f31782cc128e55073d26a7acd4f2a987130e12d5c0d5150010580e361e: Status 404 returned error can't find the container with id 999fe6f31782cc128e55073d26a7acd4f2a987130e12d5c0d5150010580e361e Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.867312 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f00f131e-56a8-4fae-a498-798713d2159f","Type":"ContainerStarted","Data":"3673b3443d4ba1d7f90e11d19590b6b725d3fd74d821289ba3dea4614690e212"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.869282 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.869346 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-pgtv9" event={"ID":"145e21b1-c3a2-4057-a5e0-07e7d4196563","Type":"ContainerDied","Data":"652e65a79ff5cb6213aeb319351d17981ccbb6f22938bc88043e8eeb5ebe6be2"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.877302 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" event={"ID":"8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3","Type":"ContainerDied","Data":"dab5f0e31f397d81534b723a836be60853b5e1f79747336bc85804357e4251db"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.877322 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pb7f4" Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.892564 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e57d2b03-9116-4a79-bfc2-5b802cf62910","Type":"ContainerStarted","Data":"956c237313205063c524040dbf960d0b2cac134f53ea06e957d78656bbf34f54"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.894099 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m" event={"ID":"6ccb4875-977f-4fea-b3fa-8a4e4ba5a874","Type":"ContainerStarted","Data":"a54cb68638961281cd1c54d4a92084b9e4c3140b27b54ce9b0a6937b829acc51"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.897886 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7185e8ed-9479-43cc-814b-cfcd26e548a5","Type":"ContainerStarted","Data":"999fe6f31782cc128e55073d26a7acd4f2a987130e12d5c0d5150010580e361e"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.909664 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"407bf5df-e69a-49ae-ac93-858be78d98a0","Type":"ContainerStarted","Data":"4920ff6ff351a12d39c7685db263fbb989815ef22fa582507cdc66281c7dc4ed"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.913645 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b674f54c6-zhrj9" event={"ID":"a2362241-225f-40e2-9be3-67766a65316b","Type":"ContainerStarted","Data":"20a55c416798d5f6571433d370c7db5061a997e42c61cb7e7765b2828ccc1615"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.913694 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b674f54c6-zhrj9" event={"ID":"a2362241-225f-40e2-9be3-67766a65316b","Type":"ContainerStarted","Data":"a663a2023da5a32f6bd5f0183836e1e7fbb066f93180133dd07965669a43de50"} Jan 27 18:26:15 crc kubenswrapper[4907]: 
I0127 18:26:15.918182 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f21c64d8-b95d-460b-a32f-1498c725d8e8","Type":"ContainerStarted","Data":"5bf7de1b06e0edb9333c802c1f12063b0b99ab1bc4bdfce12c369626362c9e4b"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.921814 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96prz" event={"ID":"daaea3c0-a88d-442f-be06-bb95b2825fcc","Type":"ContainerStarted","Data":"35a118a32372ba6e550f32dc8677672a4c6b740670e8816571c1cb5981cd69d2"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.923540 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0b24ac54-7ca4-4b1a-b26c-41ce82025599","Type":"ContainerStarted","Data":"1d837f51b3b5b34571907a5beab1539e68355f711343794293643ca8d2f93b93"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.927906 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" event={"ID":"e10199f9-f072-4566-ad76-a99c49596214","Type":"ContainerStarted","Data":"f96fb0dce830aa200204ea5c77ba4a00a44345dbe24b04716a38900624b63038"} Jan 27 18:26:15 crc kubenswrapper[4907]: I0127 18:26:15.928720 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:26:16 crc kubenswrapper[4907]: I0127 18:26:16.059747 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pb7f4"] Jan 27 18:26:16 crc kubenswrapper[4907]: I0127 18:26:16.069609 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pb7f4"] Jan 27 18:26:16 crc kubenswrapper[4907]: I0127 18:26:16.070708 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" podStartSLOduration=3.964041906 podStartE2EDuration="18.070691577s" podCreationTimestamp="2026-01-27 18:25:58 
+0000 UTC" firstStartedPulling="2026-01-27 18:25:59.225108389 +0000 UTC m=+1214.354391001" lastFinishedPulling="2026-01-27 18:26:13.33175806 +0000 UTC m=+1228.461040672" observedRunningTime="2026-01-27 18:26:16.051530487 +0000 UTC m=+1231.180813099" watchObservedRunningTime="2026-01-27 18:26:16.070691577 +0000 UTC m=+1231.199974189" Jan 27 18:26:16 crc kubenswrapper[4907]: I0127 18:26:16.111670 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pgtv9"] Jan 27 18:26:16 crc kubenswrapper[4907]: I0127 18:26:16.120226 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pgtv9"] Jan 27 18:26:16 crc kubenswrapper[4907]: I0127 18:26:16.125040 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7b674f54c6-zhrj9" podStartSLOduration=11.125026845 podStartE2EDuration="11.125026845s" podCreationTimestamp="2026-01-27 18:26:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:26:16.095524919 +0000 UTC m=+1231.224807531" watchObservedRunningTime="2026-01-27 18:26:16.125026845 +0000 UTC m=+1231.254309447" Jan 27 18:26:16 crc kubenswrapper[4907]: I0127 18:26:16.331317 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 18:26:16 crc kubenswrapper[4907]: I0127 18:26:16.474003 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-2q6jk"] Jan 27 18:26:17 crc kubenswrapper[4907]: W0127 18:26:17.308244 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32811f4d_c205_437d_a06c_ac4fff30cead.slice/crio-8a4d43d5a559b1ac93ccbe460fba8638e238eaab9c0e5705ea47fc24e005b832 WatchSource:0}: Error finding container 8a4d43d5a559b1ac93ccbe460fba8638e238eaab9c0e5705ea47fc24e005b832: Status 404 returned error can't 
find the container with id 8a4d43d5a559b1ac93ccbe460fba8638e238eaab9c0e5705ea47fc24e005b832 Jan 27 18:26:17 crc kubenswrapper[4907]: W0127 18:26:17.314684 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89e5e512_03ab_41c7_8cde_1e20d1f72d0d.slice/crio-34a0523e9b10c74f6238664c343423393b5f5f70f0310f4243b48221328b58a9 WatchSource:0}: Error finding container 34a0523e9b10c74f6238664c343423393b5f5f70f0310f4243b48221328b58a9: Status 404 returned error can't find the container with id 34a0523e9b10c74f6238664c343423393b5f5f70f0310f4243b48221328b58a9 Jan 27 18:26:17 crc kubenswrapper[4907]: I0127 18:26:17.761612 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="145e21b1-c3a2-4057-a5e0-07e7d4196563" path="/var/lib/kubelet/pods/145e21b1-c3a2-4057-a5e0-07e7d4196563/volumes" Jan 27 18:26:17 crc kubenswrapper[4907]: I0127 18:26:17.764151 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3" path="/var/lib/kubelet/pods/8a60a3a1-171b-4ea9-b6cc-a20aa1e219c3/volumes" Jan 27 18:26:17 crc kubenswrapper[4907]: I0127 18:26:17.944608 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2q6jk" event={"ID":"89e5e512-03ab-41c7-8cde-1e20d1f72d0d","Type":"ContainerStarted","Data":"34a0523e9b10c74f6238664c343423393b5f5f70f0310f4243b48221328b58a9"} Jan 27 18:26:17 crc kubenswrapper[4907]: I0127 18:26:17.945815 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"32811f4d-c205-437d-a06c-ac4fff30cead","Type":"ContainerStarted","Data":"8a4d43d5a559b1ac93ccbe460fba8638e238eaab9c0e5705ea47fc24e005b832"} Jan 27 18:26:23 crc kubenswrapper[4907]: I0127 18:26:23.485824 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:26:23 crc kubenswrapper[4907]: I0127 18:26:23.969467 4907 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:26:24 crc kubenswrapper[4907]: I0127 18:26:24.039243 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zqddl"] Jan 27 18:26:24 crc kubenswrapper[4907]: I0127 18:26:24.043026 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" podUID="e10199f9-f072-4566-ad76-a99c49596214" containerName="dnsmasq-dns" containerID="cri-o://f96fb0dce830aa200204ea5c77ba4a00a44345dbe24b04716a38900624b63038" gracePeriod=10 Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.054618 4907 generic.go:334] "Generic (PLEG): container finished" podID="e10199f9-f072-4566-ad76-a99c49596214" containerID="f96fb0dce830aa200204ea5c77ba4a00a44345dbe24b04716a38900624b63038" exitCode=0 Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.054691 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" event={"ID":"e10199f9-f072-4566-ad76-a99c49596214","Type":"ContainerDied","Data":"f96fb0dce830aa200204ea5c77ba4a00a44345dbe24b04716a38900624b63038"} Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.058253 4907 generic.go:334] "Generic (PLEG): container finished" podID="89e5e512-03ab-41c7-8cde-1e20d1f72d0d" containerID="83179941e673cfd9fba83014df5d73487d53c0b1a94a5def74609dda5504f985" exitCode=0 Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.058330 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2q6jk" event={"ID":"89e5e512-03ab-41c7-8cde-1e20d1f72d0d","Type":"ContainerDied","Data":"83179941e673cfd9fba83014df5d73487d53c0b1a94a5def74609dda5504f985"} Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.062399 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"e57d2b03-9116-4a79-bfc2-5b802cf62910","Type":"ContainerStarted","Data":"994bcd73441577f1686e2507659ece4419cf6c1439182a16ddd4180ef5f67906"} Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.064629 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96prz" event={"ID":"daaea3c0-a88d-442f-be06-bb95b2825fcc","Type":"ContainerStarted","Data":"5b92df5545c5ea8c056c2f10cd2cdf8a4bb604d3ef980b6dbe0c89ff124e7045"} Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.064767 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-96prz" Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.141580 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-96prz" podStartSLOduration=9.289736801 podStartE2EDuration="17.141547773s" podCreationTimestamp="2026-01-27 18:26:08 +0000 UTC" firstStartedPulling="2026-01-27 18:26:15.359510919 +0000 UTC m=+1230.488793531" lastFinishedPulling="2026-01-27 18:26:23.211321891 +0000 UTC m=+1238.340604503" observedRunningTime="2026-01-27 18:26:25.115906357 +0000 UTC m=+1240.245188989" watchObservedRunningTime="2026-01-27 18:26:25.141547773 +0000 UTC m=+1240.270830385" Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.517763 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.616570 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-config\") pod \"e10199f9-f072-4566-ad76-a99c49596214\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.616744 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-dns-svc\") pod \"e10199f9-f072-4566-ad76-a99c49596214\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.616956 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpr4d\" (UniqueName: \"kubernetes.io/projected/e10199f9-f072-4566-ad76-a99c49596214-kube-api-access-jpr4d\") pod \"e10199f9-f072-4566-ad76-a99c49596214\" (UID: \"e10199f9-f072-4566-ad76-a99c49596214\") " Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.622454 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e10199f9-f072-4566-ad76-a99c49596214-kube-api-access-jpr4d" (OuterVolumeSpecName: "kube-api-access-jpr4d") pod "e10199f9-f072-4566-ad76-a99c49596214" (UID: "e10199f9-f072-4566-ad76-a99c49596214"). InnerVolumeSpecName "kube-api-access-jpr4d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.724520 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpr4d\" (UniqueName: \"kubernetes.io/projected/e10199f9-f072-4566-ad76-a99c49596214-kube-api-access-jpr4d\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.829844 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.830499 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:25 crc kubenswrapper[4907]: I0127 18:26:25.838825 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:26 crc kubenswrapper[4907]: I0127 18:26:26.081135 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" event={"ID":"e10199f9-f072-4566-ad76-a99c49596214","Type":"ContainerDied","Data":"e658e65eaac3bf3792b88640344c5c2ebf8267bf6a569d2616e61c89cbae756d"} Jan 27 18:26:26 crc kubenswrapper[4907]: I0127 18:26:26.081149 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zqddl" Jan 27 18:26:26 crc kubenswrapper[4907]: I0127 18:26:26.081295 4907 scope.go:117] "RemoveContainer" containerID="f96fb0dce830aa200204ea5c77ba4a00a44345dbe24b04716a38900624b63038" Jan 27 18:26:26 crc kubenswrapper[4907]: I0127 18:26:26.084835 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce","Type":"ContainerStarted","Data":"f4b13668a28a72bb72f1ac77a40e49f191cf7ff0408f2b34e10c4e165b48abf6"} Jan 27 18:26:26 crc kubenswrapper[4907]: I0127 18:26:26.089163 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 18:26:26 crc kubenswrapper[4907]: I0127 18:26:26.120356 4907 scope.go:117] "RemoveContainer" containerID="7daaccfc37ac15f3139901ecde0bd9f2893f0a588b9db9d90f5cf0512ba3ae34" Jan 27 18:26:26 crc kubenswrapper[4907]: I0127 18:26:26.189020 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-65dccccccb-km74l"] Jan 27 18:26:27 crc kubenswrapper[4907]: I0127 18:26:27.283313 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e10199f9-f072-4566-ad76-a99c49596214" (UID: "e10199f9-f072-4566-ad76-a99c49596214"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:27 crc kubenswrapper[4907]: I0127 18:26:27.355060 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:27 crc kubenswrapper[4907]: I0127 18:26:27.982787 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-config" (OuterVolumeSpecName: "config") pod "e10199f9-f072-4566-ad76-a99c49596214" (UID: "e10199f9-f072-4566-ad76-a99c49596214"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:27 crc kubenswrapper[4907]: I0127 18:26:27.986071 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e10199f9-f072-4566-ad76-a99c49596214-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.189666 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m" podStartSLOduration=14.91194422 podStartE2EDuration="23.189649627s" podCreationTimestamp="2026-01-27 18:26:05 +0000 UTC" firstStartedPulling="2026-01-27 18:26:15.131932721 +0000 UTC m=+1230.261215333" lastFinishedPulling="2026-01-27 18:26:23.409638098 +0000 UTC m=+1238.538920740" observedRunningTime="2026-01-27 18:26:28.187290669 +0000 UTC m=+1243.316573281" watchObservedRunningTime="2026-01-27 18:26:28.189649627 +0000 UTC m=+1243.318932239" Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.246742 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=18.189278363 podStartE2EDuration="26.246726104s" podCreationTimestamp="2026-01-27 18:26:02 +0000 UTC" firstStartedPulling="2026-01-27 18:26:15.15385717 +0000 UTC m=+1230.283139782" 
lastFinishedPulling="2026-01-27 18:26:23.211304901 +0000 UTC m=+1238.340587523" observedRunningTime="2026-01-27 18:26:28.24207298 +0000 UTC m=+1243.371355602" watchObservedRunningTime="2026-01-27 18:26:28.246726104 +0000 UTC m=+1243.376008716" Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.347995 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52cb02a9-7a60-4761-9770-a9b6910f1088","Type":"ContainerStarted","Data":"008c3a3f99a2ccc59327a0f9a489a17aa72fc4b82aca7d17aabd1500b22d4c8e"} Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.348303 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7185e8ed-9479-43cc-814b-cfcd26e548a5","Type":"ContainerStarted","Data":"731d33de975db175e8d1cd57a0c798460c5494dff18b12d28a0dbfb0b3819d7a"} Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.348321 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0b24ac54-7ca4-4b1a-b26c-41ce82025599","Type":"ContainerStarted","Data":"644a0a868f8dd791c393c93f9e9b10f1d83e6fb8fded0efd421841f46facbc3b"} Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.348331 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2q6jk" event={"ID":"89e5e512-03ab-41c7-8cde-1e20d1f72d0d","Type":"ContainerStarted","Data":"e7dbc577c77fbfea482195487976749b0cc3192db114070a33afd27833b407e1"} Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.348343 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"32811f4d-c205-437d-a06c-ac4fff30cead","Type":"ContainerStarted","Data":"1eff7c3a7c24361045db808712147763696305ae92862383f68d7f81ed7ad178"} Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.348355 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" 
event={"ID":"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e","Type":"ContainerStarted","Data":"6e1c166ec4ad12335939eace84afc80867bd30207c4badea742d3beea9a3565a"} Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.348375 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"45d050d2-eeb4-4603-a6c4-1cbdd454ea35","Type":"ContainerStarted","Data":"9e14e3ba528ee447cbbdbc0a37f0975e10855bd00aabc894dc382b32e4dc8e87"} Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.348394 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-s824m" event={"ID":"6ccb4875-977f-4fea-b3fa-8a4e4ba5a874","Type":"ContainerStarted","Data":"13f9d2172cd6b14bbff4bf83ffadbbdaab9c2902a883c61569ab7017d03a58b8"} Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.348405 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"407bf5df-e69a-49ae-ac93-858be78d98a0","Type":"ContainerStarted","Data":"4a4deadd6f20a2b4edaf80cef580afa7e37101f95af38ed4ed51e80b20296292"} Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.386458 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zqddl"] Jan 27 18:26:28 crc kubenswrapper[4907]: I0127 18:26:28.397252 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zqddl"] Jan 27 18:26:29 crc kubenswrapper[4907]: I0127 18:26:29.130197 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2q6jk" event={"ID":"89e5e512-03ab-41c7-8cde-1e20d1f72d0d","Type":"ContainerStarted","Data":"e464b2df2751ae75bd57453f56d51a3f252e84a15a2ac488852d31b7a961a145"} Jan 27 18:26:29 crc kubenswrapper[4907]: I0127 18:26:29.130756 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:29 crc kubenswrapper[4907]: I0127 18:26:29.130773 4907 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:29 crc kubenswrapper[4907]: I0127 18:26:29.132653 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f00f131e-56a8-4fae-a498-798713d2159f","Type":"ContainerStarted","Data":"b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13"} Jan 27 18:26:29 crc kubenswrapper[4907]: I0127 18:26:29.132708 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 18:26:29 crc kubenswrapper[4907]: I0127 18:26:29.134378 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f21c64d8-b95d-460b-a32f-1498c725d8e8","Type":"ContainerStarted","Data":"ace3a29956f71df4b0ebe7683ee7e04480a1c89cf3ff804cb03f1121b9b98d03"} Jan 27 18:26:29 crc kubenswrapper[4907]: I0127 18:26:29.134447 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f21c64d8-b95d-460b-a32f-1498c725d8e8" containerName="init-config-reloader" containerID="cri-o://ace3a29956f71df4b0ebe7683ee7e04480a1c89cf3ff804cb03f1121b9b98d03" gracePeriod=600 Jan 27 18:26:29 crc kubenswrapper[4907]: I0127 18:26:29.135517 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 27 18:26:29 crc kubenswrapper[4907]: I0127 18:26:29.159162 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-2q6jk" podStartSLOduration=15.275794088 podStartE2EDuration="21.159143443s" podCreationTimestamp="2026-01-27 18:26:08 +0000 UTC" firstStartedPulling="2026-01-27 18:26:17.328197283 +0000 UTC m=+1232.457479895" lastFinishedPulling="2026-01-27 18:26:23.211546618 +0000 UTC m=+1238.340829250" observedRunningTime="2026-01-27 18:26:29.149657321 +0000 UTC m=+1244.278939943" watchObservedRunningTime="2026-01-27 
18:26:29.159143443 +0000 UTC m=+1244.288426065" Jan 27 18:26:29 crc kubenswrapper[4907]: I0127 18:26:29.179983 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=14.690520177 podStartE2EDuration="25.17995103s" podCreationTimestamp="2026-01-27 18:26:04 +0000 UTC" firstStartedPulling="2026-01-27 18:26:15.347902126 +0000 UTC m=+1230.477184738" lastFinishedPulling="2026-01-27 18:26:25.837332979 +0000 UTC m=+1240.966615591" observedRunningTime="2026-01-27 18:26:29.173451274 +0000 UTC m=+1244.302733886" watchObservedRunningTime="2026-01-27 18:26:29.17995103 +0000 UTC m=+1244.309233652" Jan 27 18:26:29 crc kubenswrapper[4907]: I0127 18:26:29.762581 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e10199f9-f072-4566-ad76-a99c49596214" path="/var/lib/kubelet/pods/e10199f9-f072-4566-ad76-a99c49596214/volumes" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.156449 4907 generic.go:334] "Generic (PLEG): container finished" podID="e57d2b03-9116-4a79-bfc2-5b802cf62910" containerID="994bcd73441577f1686e2507659ece4419cf6c1439182a16ddd4180ef5f67906" exitCode=0 Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.156764 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e57d2b03-9116-4a79-bfc2-5b802cf62910","Type":"ContainerDied","Data":"994bcd73441577f1686e2507659ece4419cf6c1439182a16ddd4180ef5f67906"} Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.847596 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-jxkhc"] Jan 27 18:26:31 crc kubenswrapper[4907]: E0127 18:26:31.848200 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e10199f9-f072-4566-ad76-a99c49596214" containerName="dnsmasq-dns" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.848217 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="e10199f9-f072-4566-ad76-a99c49596214" 
containerName="dnsmasq-dns" Jan 27 18:26:31 crc kubenswrapper[4907]: E0127 18:26:31.848238 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e10199f9-f072-4566-ad76-a99c49596214" containerName="init" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.848244 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="e10199f9-f072-4566-ad76-a99c49596214" containerName="init" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.848413 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="e10199f9-f072-4566-ad76-a99c49596214" containerName="dnsmasq-dns" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.849057 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.854829 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.870725 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jxkhc"] Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.874752 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6ab393-1e13-4683-81ae-6e28d9261d30-combined-ca-bundle\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.874822 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/af6ab393-1e13-4683-81ae-6e28d9261d30-ovs-rundir\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 
18:26:31.874935 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/af6ab393-1e13-4683-81ae-6e28d9261d30-ovn-rundir\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.874962 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/af6ab393-1e13-4683-81ae-6e28d9261d30-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.875013 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s649c\" (UniqueName: \"kubernetes.io/projected/af6ab393-1e13-4683-81ae-6e28d9261d30-kube-api-access-s649c\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.875204 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6ab393-1e13-4683-81ae-6e28d9261d30-config\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.986449 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/af6ab393-1e13-4683-81ae-6e28d9261d30-ovn-rundir\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc 
kubenswrapper[4907]: I0127 18:26:31.986507 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/af6ab393-1e13-4683-81ae-6e28d9261d30-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.986600 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s649c\" (UniqueName: \"kubernetes.io/projected/af6ab393-1e13-4683-81ae-6e28d9261d30-kube-api-access-s649c\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.986723 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6ab393-1e13-4683-81ae-6e28d9261d30-config\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.986811 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6ab393-1e13-4683-81ae-6e28d9261d30-combined-ca-bundle\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.986855 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/af6ab393-1e13-4683-81ae-6e28d9261d30-ovs-rundir\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.986979 
4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/af6ab393-1e13-4683-81ae-6e28d9261d30-ovn-rundir\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.987065 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/af6ab393-1e13-4683-81ae-6e28d9261d30-ovs-rundir\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.987773 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6ab393-1e13-4683-81ae-6e28d9261d30-config\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.993494 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/af6ab393-1e13-4683-81ae-6e28d9261d30-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:31 crc kubenswrapper[4907]: I0127 18:26:31.998841 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6ab393-1e13-4683-81ae-6e28d9261d30-combined-ca-bundle\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.012217 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-vtsmf"] Jan 27 
18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.024861 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.028101 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.032451 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-vtsmf"] Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.036582 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s649c\" (UniqueName: \"kubernetes.io/projected/af6ab393-1e13-4683-81ae-6e28d9261d30-kube-api-access-s649c\") pod \"ovn-controller-metrics-jxkhc\" (UID: \"af6ab393-1e13-4683-81ae-6e28d9261d30\") " pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.088452 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggs7z\" (UniqueName: \"kubernetes.io/projected/b1da3ecb-de7c-4586-b873-8c837b0bb690-kube-api-access-ggs7z\") pod \"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.088767 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.088912 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-config\") pod 
\"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.088996 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.173864 4907 generic.go:334] "Generic (PLEG): container finished" podID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerID="644a0a868f8dd791c393c93f9e9b10f1d83e6fb8fded0efd421841f46facbc3b" exitCode=0 Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.173920 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0b24ac54-7ca4-4b1a-b26c-41ce82025599","Type":"ContainerDied","Data":"644a0a868f8dd791c393c93f9e9b10f1d83e6fb8fded0efd421841f46facbc3b"} Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.178859 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-jxkhc" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.190825 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-config\") pod \"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.190900 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.191022 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggs7z\" (UniqueName: \"kubernetes.io/projected/b1da3ecb-de7c-4586-b873-8c837b0bb690-kube-api-access-ggs7z\") pod \"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.191234 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.191767 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-config\") pod \"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc 
kubenswrapper[4907]: I0127 18:26:32.191811 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.192008 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.221744 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-vtsmf"] Jan 27 18:26:32 crc kubenswrapper[4907]: E0127 18:26:32.222593 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-ggs7z], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" podUID="b1da3ecb-de7c-4586-b873-8c837b0bb690" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.224308 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggs7z\" (UniqueName: \"kubernetes.io/projected/b1da3ecb-de7c-4586-b873-8c837b0bb690-kube-api-access-ggs7z\") pod \"dnsmasq-dns-5bf47b49b7-vtsmf\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.233762 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-9g4ks"] Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.236299 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.239402 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.252982 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-9g4ks"] Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.395698 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.395860 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.395922 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj6q7\" (UniqueName: \"kubernetes.io/projected/ebc9482a-0b0e-48d7-8409-83be16d41469-kube-api-access-dj6q7\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.395987 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-config\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " 
pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.396055 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-dns-svc\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.497397 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.497545 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.497840 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj6q7\" (UniqueName: \"kubernetes.io/projected/ebc9482a-0b0e-48d7-8409-83be16d41469-kube-api-access-dj6q7\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.497887 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-config\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 
18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.497942 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-dns-svc\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.498284 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.498361 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.498852 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-dns-svc\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.499444 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-config\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.532616 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dj6q7\" (UniqueName: \"kubernetes.io/projected/ebc9482a-0b0e-48d7-8409-83be16d41469-kube-api-access-dj6q7\") pod \"dnsmasq-dns-8554648995-9g4ks\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:32 crc kubenswrapper[4907]: I0127 18:26:32.610329 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.090704 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.181227 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.191366 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.313895 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-ovsdbserver-nb\") pod \"b1da3ecb-de7c-4586-b873-8c837b0bb690\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.314098 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-dns-svc\") pod \"b1da3ecb-de7c-4586-b873-8c837b0bb690\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.314187 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-config\") pod \"b1da3ecb-de7c-4586-b873-8c837b0bb690\" (UID: 
\"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.314267 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggs7z\" (UniqueName: \"kubernetes.io/projected/b1da3ecb-de7c-4586-b873-8c837b0bb690-kube-api-access-ggs7z\") pod \"b1da3ecb-de7c-4586-b873-8c837b0bb690\" (UID: \"b1da3ecb-de7c-4586-b873-8c837b0bb690\") " Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.314376 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b1da3ecb-de7c-4586-b873-8c837b0bb690" (UID: "b1da3ecb-de7c-4586-b873-8c837b0bb690"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.314616 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b1da3ecb-de7c-4586-b873-8c837b0bb690" (UID: "b1da3ecb-de7c-4586-b873-8c837b0bb690"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.314660 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-config" (OuterVolumeSpecName: "config") pod "b1da3ecb-de7c-4586-b873-8c837b0bb690" (UID: "b1da3ecb-de7c-4586-b873-8c837b0bb690"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.315097 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.315117 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.315127 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1da3ecb-de7c-4586-b873-8c837b0bb690-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.333627 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1da3ecb-de7c-4586-b873-8c837b0bb690-kube-api-access-ggs7z" (OuterVolumeSpecName: "kube-api-access-ggs7z") pod "b1da3ecb-de7c-4586-b873-8c837b0bb690" (UID: "b1da3ecb-de7c-4586-b873-8c837b0bb690"). InnerVolumeSpecName "kube-api-access-ggs7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:33 crc kubenswrapper[4907]: I0127 18:26:33.416945 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggs7z\" (UniqueName: \"kubernetes.io/projected/b1da3ecb-de7c-4586-b873-8c837b0bb690-kube-api-access-ggs7z\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.190863 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-vtsmf" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.237237 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-vtsmf"] Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.246906 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-vtsmf"] Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.689508 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-9g4ks"] Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.742756 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-d92b2"] Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.744824 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.764656 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-d92b2"] Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.805452 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.843189 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgjcx\" (UniqueName: \"kubernetes.io/projected/ef031d23-3f7c-40b7-b2f1-72863036ca69-kube-api-access-jgjcx\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.843319 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-config\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: 
\"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.843363 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.843458 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.843596 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.946008 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.946137 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgjcx\" (UniqueName: \"kubernetes.io/projected/ef031d23-3f7c-40b7-b2f1-72863036ca69-kube-api-access-jgjcx\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: 
\"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.946217 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-config\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.946238 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.946295 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.947231 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.947287 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-config\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 
18:26:34.947308 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.947368 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:34 crc kubenswrapper[4907]: I0127 18:26:34.972585 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgjcx\" (UniqueName: \"kubernetes.io/projected/ef031d23-3f7c-40b7-b2f1-72863036ca69-kube-api-access-jgjcx\") pod \"dnsmasq-dns-b8fbc5445-d92b2\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:35 crc kubenswrapper[4907]: I0127 18:26:35.065045 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:35 crc kubenswrapper[4907]: I0127 18:26:35.202040 4907 generic.go:334] "Generic (PLEG): container finished" podID="f21c64d8-b95d-460b-a32f-1498c725d8e8" containerID="ace3a29956f71df4b0ebe7683ee7e04480a1c89cf3ff804cb03f1121b9b98d03" exitCode=0 Jan 27 18:26:35 crc kubenswrapper[4907]: I0127 18:26:35.202086 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f21c64d8-b95d-460b-a32f-1498c725d8e8","Type":"ContainerDied","Data":"ace3a29956f71df4b0ebe7683ee7e04480a1c89cf3ff804cb03f1121b9b98d03"} Jan 27 18:26:35 crc kubenswrapper[4907]: I0127 18:26:35.765065 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1da3ecb-de7c-4586-b873-8c837b0bb690" path="/var/lib/kubelet/pods/b1da3ecb-de7c-4586-b873-8c837b0bb690/volumes" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.005085 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.176314 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-thanos-prometheus-http-client-file\") pod \"f21c64d8-b95d-460b-a32f-1498c725d8e8\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.176637 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-2\") pod \"f21c64d8-b95d-460b-a32f-1498c725d8e8\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.176885 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"f21c64d8-b95d-460b-a32f-1498c725d8e8\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.176930 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-1\") pod \"f21c64d8-b95d-460b-a32f-1498c725d8e8\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.177015 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-config\") pod \"f21c64d8-b95d-460b-a32f-1498c725d8e8\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " Jan 27 18:26:36 crc 
kubenswrapper[4907]: I0127 18:26:36.177115 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f21c64d8-b95d-460b-a32f-1498c725d8e8-config-out\") pod \"f21c64d8-b95d-460b-a32f-1498c725d8e8\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.177152 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-tls-assets\") pod \"f21c64d8-b95d-460b-a32f-1498c725d8e8\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.177176 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-0\") pod \"f21c64d8-b95d-460b-a32f-1498c725d8e8\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.177245 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b9vm\" (UniqueName: \"kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-kube-api-access-6b9vm\") pod \"f21c64d8-b95d-460b-a32f-1498c725d8e8\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.177323 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-web-config\") pod \"f21c64d8-b95d-460b-a32f-1498c725d8e8\" (UID: \"f21c64d8-b95d-460b-a32f-1498c725d8e8\") " Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.177395 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "f21c64d8-b95d-460b-a32f-1498c725d8e8" (UID: "f21c64d8-b95d-460b-a32f-1498c725d8e8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.177416 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "f21c64d8-b95d-460b-a32f-1498c725d8e8" (UID: "f21c64d8-b95d-460b-a32f-1498c725d8e8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.177907 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "f21c64d8-b95d-460b-a32f-1498c725d8e8" (UID: "f21c64d8-b95d-460b-a32f-1498c725d8e8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.183072 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-kube-api-access-6b9vm" (OuterVolumeSpecName: "kube-api-access-6b9vm") pod "f21c64d8-b95d-460b-a32f-1498c725d8e8" (UID: "f21c64d8-b95d-460b-a32f-1498c725d8e8"). InnerVolumeSpecName "kube-api-access-6b9vm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.183219 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "f21c64d8-b95d-460b-a32f-1498c725d8e8" (UID: "f21c64d8-b95d-460b-a32f-1498c725d8e8"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.186347 4907 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.186374 4907 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.186699 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "f21c64d8-b95d-460b-a32f-1498c725d8e8" (UID: "f21c64d8-b95d-460b-a32f-1498c725d8e8"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.189903 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f21c64d8-b95d-460b-a32f-1498c725d8e8-config-out" (OuterVolumeSpecName: "config-out") pod "f21c64d8-b95d-460b-a32f-1498c725d8e8" (UID: "f21c64d8-b95d-460b-a32f-1498c725d8e8"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.194787 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-web-config" (OuterVolumeSpecName: "web-config") pod "f21c64d8-b95d-460b-a32f-1498c725d8e8" (UID: "f21c64d8-b95d-460b-a32f-1498c725d8e8"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.195764 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-config" (OuterVolumeSpecName: "config") pod "f21c64d8-b95d-460b-a32f-1498c725d8e8" (UID: "f21c64d8-b95d-460b-a32f-1498c725d8e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.207447 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "f21c64d8-b95d-460b-a32f-1498c725d8e8" (UID: "f21c64d8-b95d-460b-a32f-1498c725d8e8"). InnerVolumeSpecName "pvc-f7807fd9-6025-4711-8134-26e284a305f6". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.214178 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f21c64d8-b95d-460b-a32f-1498c725d8e8","Type":"ContainerDied","Data":"5bf7de1b06e0edb9333c802c1f12063b0b99ab1bc4bdfce12c369626362c9e4b"} Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.214228 4907 scope.go:117] "RemoveContainer" containerID="ace3a29956f71df4b0ebe7683ee7e04480a1c89cf3ff804cb03f1121b9b98d03" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.214362 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.218358 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 27 18:26:36 crc kubenswrapper[4907]: E0127 18:26:36.219007 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f21c64d8-b95d-460b-a32f-1498c725d8e8" containerName="init-config-reloader" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.219032 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f21c64d8-b95d-460b-a32f-1498c725d8e8" containerName="init-config-reloader" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.219223 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f21c64d8-b95d-460b-a32f-1498c725d8e8" containerName="init-config-reloader" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.225198 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.226239 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.231903 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.232013 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.232241 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.232431 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-fwnt8" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.289429 4907 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/f21c64d8-b95d-460b-a32f-1498c725d8e8-config-out\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.289471 4907 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.289485 4907 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f21c64d8-b95d-460b-a32f-1498c725d8e8-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.289530 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b9vm\" (UniqueName: \"kubernetes.io/projected/f21c64d8-b95d-460b-a32f-1498c725d8e8-kube-api-access-6b9vm\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.289544 4907 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-web-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.289576 4907 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.289622 4907 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") on node \"crc\" " Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.289734 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/f21c64d8-b95d-460b-a32f-1498c725d8e8-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.363595 4907 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.363764 4907 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f7807fd9-6025-4711-8134-26e284a305f6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6") on node "crc" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.387623 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.395890 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ccf935cc-7fb0-4bb1-80da-3bd0cdc838b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ccf935cc-7fb0-4bb1-80da-3bd0cdc838b9\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.395985 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7e986b-1dca-4795-85f7-e62cdd92d995-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.396055 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/df7e986b-1dca-4795-85f7-e62cdd92d995-lock\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.396123 
4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.396194 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/df7e986b-1dca-4795-85f7-e62cdd92d995-cache\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.396213 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj6x4\" (UniqueName: \"kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-kube-api-access-pj6x4\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.396300 4907 reconciler_common.go:293] "Volume detached for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.411258 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.428061 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.430622 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.432725 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.436750 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.436828 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.436955 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-v8l29" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.437053 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.437117 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.437189 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.441924 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.442879 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-9g4ks"] Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.462874 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.501040 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: E0127 18:26:36.501572 4907 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 18:26:36 crc kubenswrapper[4907]: E0127 18:26:36.501594 4907 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.501516 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/df7e986b-1dca-4795-85f7-e62cdd92d995-cache\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: E0127 18:26:36.501647 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift podName:df7e986b-1dca-4795-85f7-e62cdd92d995 nodeName:}" failed. No retries permitted until 2026-01-27 18:26:37.001628588 +0000 UTC m=+1252.130911200 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift") pod "swift-storage-0" (UID: "df7e986b-1dca-4795-85f7-e62cdd92d995") : configmap "swift-ring-files" not found Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.501673 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj6x4\" (UniqueName: \"kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-kube-api-access-pj6x4\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.501821 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ccf935cc-7fb0-4bb1-80da-3bd0cdc838b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ccf935cc-7fb0-4bb1-80da-3bd0cdc838b9\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.501957 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7e986b-1dca-4795-85f7-e62cdd92d995-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.502074 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/df7e986b-1dca-4795-85f7-e62cdd92d995-lock\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.502393 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/df7e986b-1dca-4795-85f7-e62cdd92d995-lock\") pod \"swift-storage-0\" 
(UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.502568 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/df7e986b-1dca-4795-85f7-e62cdd92d995-cache\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.504534 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.504600 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ccf935cc-7fb0-4bb1-80da-3bd0cdc838b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ccf935cc-7fb0-4bb1-80da-3bd0cdc838b9\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/baac55f4b52b016f489a503d203379da7c075c77f069958efa80bd975760c269/globalmount\"" pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.511371 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7e986b-1dca-4795-85f7-e62cdd92d995-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.530299 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj6x4\" (UniqueName: \"kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-kube-api-access-pj6x4\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.557516 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ccf935cc-7fb0-4bb1-80da-3bd0cdc838b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ccf935cc-7fb0-4bb1-80da-3bd0cdc838b9\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.603986 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.604118 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.604253 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/07d384d2-43f4-4290-837f-fb784fc28b37-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.604430 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-config\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.604674 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.604755 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.604854 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.604909 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.604941 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tslf\" (UniqueName: 
\"kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-kube-api-access-8tslf\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.605050 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.669610 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jxkhc"] Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.677161 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-d92b2"] Jan 27 18:26:36 crc kubenswrapper[4907]: W0127 18:26:36.678697 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf6ab393_1e13_4683_81ae_6e28d9261d30.slice/crio-fbf7b40984d8adfc57eaa48b8470d2ac19168c51cb440d9e1c53816673d51df0 WatchSource:0}: Error finding container fbf7b40984d8adfc57eaa48b8470d2ac19168c51cb440d9e1c53816673d51df0: Status 404 returned error can't find the container with id fbf7b40984d8adfc57eaa48b8470d2ac19168c51cb440d9e1c53816673d51df0 Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.706962 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc 
kubenswrapper[4907]: I0127 18:26:36.707024 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.707052 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.707081 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/07d384d2-43f4-4290-837f-fb784fc28b37-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.707127 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-config\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.707497 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.707535 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.707605 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.707628 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.707709 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tslf\" (UniqueName: \"kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-kube-api-access-8tslf\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.710393 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " 
pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.710787 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.711295 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.711345 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.714354 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.714415 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: 
\"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.716607 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.716637 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7c10aa9d009ec60f264ba4aa31b8554e40bc9aa6367f517a78b05ac7bb1849b2/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.717520 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/07d384d2-43f4-4290-837f-fb784fc28b37-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.717788 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-config\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.724839 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tslf\" (UniqueName: \"kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-kube-api-access-8tslf\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " 
pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.785708 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.841763 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-m9rr7"] Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.843491 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.845774 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.845978 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.846073 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 27 18:26:36 crc kubenswrapper[4907]: I0127 18:26:36.879161 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-m9rr7"] Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.014381 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-scripts\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.014760 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a5ce2510-00de-4a5b-8d9d-578b21229c8c-etc-swift\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.014813 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-dispersionconf\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.014837 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfpt5\" (UniqueName: \"kubernetes.io/projected/a5ce2510-00de-4a5b-8d9d-578b21229c8c-kube-api-access-qfpt5\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.014919 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.014962 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-swiftconf\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.014988 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-combined-ca-bundle\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.015013 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-ring-data-devices\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: E0127 18:26:37.015232 4907 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 18:26:37 crc kubenswrapper[4907]: E0127 18:26:37.015255 4907 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 18:26:37 crc kubenswrapper[4907]: E0127 18:26:37.015297 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift podName:df7e986b-1dca-4795-85f7-e62cdd92d995 nodeName:}" failed. No retries permitted until 2026-01-27 18:26:38.01528065 +0000 UTC m=+1253.144563282 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift") pod "swift-storage-0" (UID: "df7e986b-1dca-4795-85f7-e62cdd92d995") : configmap "swift-ring-files" not found Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.060789 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.117445 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-scripts\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.117791 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a5ce2510-00de-4a5b-8d9d-578b21229c8c-etc-swift\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.117871 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-dispersionconf\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.118134 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfpt5\" (UniqueName: \"kubernetes.io/projected/a5ce2510-00de-4a5b-8d9d-578b21229c8c-kube-api-access-qfpt5\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.118521 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-swiftconf\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc 
kubenswrapper[4907]: I0127 18:26:37.118578 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-combined-ca-bundle\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.118819 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-ring-data-devices\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.126799 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a5ce2510-00de-4a5b-8d9d-578b21229c8c-etc-swift\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.127018 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-scripts\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.127279 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-ring-data-devices\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.147155 4907 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-dispersionconf\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.154031 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfpt5\" (UniqueName: \"kubernetes.io/projected/a5ce2510-00de-4a5b-8d9d-578b21229c8c-kube-api-access-qfpt5\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.155737 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-swiftconf\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.168856 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-combined-ca-bundle\") pod \"swift-ring-rebalance-m9rr7\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.184755 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.233727 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7185e8ed-9479-43cc-814b-cfcd26e548a5","Type":"ContainerStarted","Data":"ec50a24abfc0c0adfac3c4f21b0754b6c82db16ffad97b48fb17b61de62c590f"} Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.238490 4907 generic.go:334] "Generic (PLEG): container finished" podID="ef031d23-3f7c-40b7-b2f1-72863036ca69" containerID="9210e052e557ee5db0c6cb854a5c34ba61fe01036174f4d90bbedc6157af149a" exitCode=0 Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.238549 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" event={"ID":"ef031d23-3f7c-40b7-b2f1-72863036ca69","Type":"ContainerDied","Data":"9210e052e557ee5db0c6cb854a5c34ba61fe01036174f4d90bbedc6157af149a"} Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.238593 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" event={"ID":"ef031d23-3f7c-40b7-b2f1-72863036ca69","Type":"ContainerStarted","Data":"14af702ae586c0a32f8c72ffae79c9a4feed72f954b871e63a6bcedfd4970e82"} Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.243350 4907 generic.go:334] "Generic (PLEG): container finished" podID="ebc9482a-0b0e-48d7-8409-83be16d41469" containerID="74d63fb45a9534d72aef7e31bb8862ecfcd262e095d5005a3061558d89bdb0e8" exitCode=0 Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.243455 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-9g4ks" event={"ID":"ebc9482a-0b0e-48d7-8409-83be16d41469","Type":"ContainerDied","Data":"74d63fb45a9534d72aef7e31bb8862ecfcd262e095d5005a3061558d89bdb0e8"} Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.243493 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-9g4ks" 
event={"ID":"ebc9482a-0b0e-48d7-8409-83be16d41469","Type":"ContainerStarted","Data":"ec2385fbeb3a3eca6fc6a574c69026c728ef33238e2a6408701718a795124c05"} Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.268389 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=5.814410728 podStartE2EDuration="26.268367969s" podCreationTimestamp="2026-01-27 18:26:11 +0000 UTC" firstStartedPulling="2026-01-27 18:26:15.601806388 +0000 UTC m=+1230.731089000" lastFinishedPulling="2026-01-27 18:26:36.055763629 +0000 UTC m=+1251.185046241" observedRunningTime="2026-01-27 18:26:37.260525214 +0000 UTC m=+1252.389807826" watchObservedRunningTime="2026-01-27 18:26:37.268367969 +0000 UTC m=+1252.397650581" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.282156 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jxkhc" event={"ID":"af6ab393-1e13-4683-81ae-6e28d9261d30","Type":"ContainerStarted","Data":"fbd0e86a97b507a00f4029b3fee02cab6c91e482fa73b49fa373e195f393dc8b"} Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.282202 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jxkhc" event={"ID":"af6ab393-1e13-4683-81ae-6e28d9261d30","Type":"ContainerStarted","Data":"fbf7b40984d8adfc57eaa48b8470d2ac19168c51cb440d9e1c53816673d51df0"} Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.288094 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0b24ac54-7ca4-4b1a-b26c-41ce82025599","Type":"ContainerStarted","Data":"641b5e7c557227e0f34d068ecbb86ed3c19d649b1a3820d27d4203ab008cf941"} Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.292243 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"32811f4d-c205-437d-a06c-ac4fff30cead","Type":"ContainerStarted","Data":"75d4a84af76842834e842b929a1d5d5a1f48c55a0bd8e730ac57db71f2d6f431"} Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.297204 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e57d2b03-9116-4a79-bfc2-5b802cf62910","Type":"ContainerStarted","Data":"1ace59d89fc8097fca650f5dd330c7a4a02797cb0386774384bb0ef81ec64e5d"} Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.344284 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=30.71478229 podStartE2EDuration="38.344258596s" podCreationTimestamp="2026-01-27 18:25:59 +0000 UTC" firstStartedPulling="2026-01-27 18:26:15.129627045 +0000 UTC m=+1230.258909657" lastFinishedPulling="2026-01-27 18:26:22.759103351 +0000 UTC m=+1237.888385963" observedRunningTime="2026-01-27 18:26:37.337483441 +0000 UTC m=+1252.466766053" watchObservedRunningTime="2026-01-27 18:26:37.344258596 +0000 UTC m=+1252.473541208" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.401637 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=28.344406218 podStartE2EDuration="36.401610601s" podCreationTimestamp="2026-01-27 18:26:01 +0000 UTC" firstStartedPulling="2026-01-27 18:26:15.154291283 +0000 UTC m=+1230.283573895" lastFinishedPulling="2026-01-27 18:26:23.211495666 +0000 UTC m=+1238.340778278" observedRunningTime="2026-01-27 18:26:37.364528947 +0000 UTC m=+1252.493811549" watchObservedRunningTime="2026-01-27 18:26:37.401610601 +0000 UTC m=+1252.530893243" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.419391 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.671493291 podStartE2EDuration="30.41936783s" podCreationTimestamp="2026-01-27 18:26:07 +0000 UTC" 
firstStartedPulling="2026-01-27 18:26:17.311040691 +0000 UTC m=+1232.440323303" lastFinishedPulling="2026-01-27 18:26:36.05891523 +0000 UTC m=+1251.188197842" observedRunningTime="2026-01-27 18:26:37.390267335 +0000 UTC m=+1252.519549947" watchObservedRunningTime="2026-01-27 18:26:37.41936783 +0000 UTC m=+1252.548650452" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.458350 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.465604 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-jxkhc" podStartSLOduration=6.465583475 podStartE2EDuration="6.465583475s" podCreationTimestamp="2026-01-27 18:26:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:26:37.408297922 +0000 UTC m=+1252.537580534" watchObservedRunningTime="2026-01-27 18:26:37.465583475 +0000 UTC m=+1252.594866087" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.681733 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:26:37 crc kubenswrapper[4907]: W0127 18:26:37.693919 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07d384d2_43f4_4290_837f_fb784fc28b37.slice/crio-fc8f0cf918d2f1e4ee6543675a8a734f0eae6d553ac279fc26c4b3464bce8f68 WatchSource:0}: Error finding container fc8f0cf918d2f1e4ee6543675a8a734f0eae6d553ac279fc26c4b3464bce8f68: Status 404 returned error can't find the container with id fc8f0cf918d2f1e4ee6543675a8a734f0eae6d553ac279fc26c4b3464bce8f68 Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.807011 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f21c64d8-b95d-460b-a32f-1498c725d8e8" 
path="/var/lib/kubelet/pods/f21c64d8-b95d-460b-a32f-1498c725d8e8/volumes" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.867181 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.941617 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-nb\") pod \"ebc9482a-0b0e-48d7-8409-83be16d41469\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.941825 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-dns-svc\") pod \"ebc9482a-0b0e-48d7-8409-83be16d41469\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.941907 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-config\") pod \"ebc9482a-0b0e-48d7-8409-83be16d41469\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.942093 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-sb\") pod \"ebc9482a-0b0e-48d7-8409-83be16d41469\" (UID: \"ebc9482a-0b0e-48d7-8409-83be16d41469\") " Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.942244 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj6q7\" (UniqueName: \"kubernetes.io/projected/ebc9482a-0b0e-48d7-8409-83be16d41469-kube-api-access-dj6q7\") pod \"ebc9482a-0b0e-48d7-8409-83be16d41469\" (UID: 
\"ebc9482a-0b0e-48d7-8409-83be16d41469\") " Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.975173 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebc9482a-0b0e-48d7-8409-83be16d41469-kube-api-access-dj6q7" (OuterVolumeSpecName: "kube-api-access-dj6q7") pod "ebc9482a-0b0e-48d7-8409-83be16d41469" (UID: "ebc9482a-0b0e-48d7-8409-83be16d41469"). InnerVolumeSpecName "kube-api-access-dj6q7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:37 crc kubenswrapper[4907]: I0127 18:26:37.982869 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-m9rr7"] Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.003231 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ebc9482a-0b0e-48d7-8409-83be16d41469" (UID: "ebc9482a-0b0e-48d7-8409-83be16d41469"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.013661 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ebc9482a-0b0e-48d7-8409-83be16d41469" (UID: "ebc9482a-0b0e-48d7-8409-83be16d41469"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.024928 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ebc9482a-0b0e-48d7-8409-83be16d41469" (UID: "ebc9482a-0b0e-48d7-8409-83be16d41469"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.041085 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-config" (OuterVolumeSpecName: "config") pod "ebc9482a-0b0e-48d7-8409-83be16d41469" (UID: "ebc9482a-0b0e-48d7-8409-83be16d41469"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.045065 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.045667 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.045765 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj6q7\" (UniqueName: \"kubernetes.io/projected/ebc9482a-0b0e-48d7-8409-83be16d41469-kube-api-access-dj6q7\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.045871 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.045963 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.046046 4907 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebc9482a-0b0e-48d7-8409-83be16d41469-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:38 crc kubenswrapper[4907]: E0127 18:26:38.046229 4907 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 18:26:38 crc kubenswrapper[4907]: E0127 18:26:38.046351 4907 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 18:26:38 crc kubenswrapper[4907]: E0127 18:26:38.046462 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift podName:df7e986b-1dca-4795-85f7-e62cdd92d995 nodeName:}" failed. No retries permitted until 2026-01-27 18:26:40.046443535 +0000 UTC m=+1255.175726147 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift") pod "swift-storage-0" (UID: "df7e986b-1dca-4795-85f7-e62cdd92d995") : configmap "swift-ring-files" not found Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.308880 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" event={"ID":"ef031d23-3f7c-40b7-b2f1-72863036ca69","Type":"ContainerStarted","Data":"2662eb1dc76e3de2247cb485da0881450418dfc3189a9f830fe4a5c909bcefe8"} Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.309053 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.310885 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-9g4ks" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.310873 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-9g4ks" event={"ID":"ebc9482a-0b0e-48d7-8409-83be16d41469","Type":"ContainerDied","Data":"ec2385fbeb3a3eca6fc6a574c69026c728ef33238e2a6408701718a795124c05"} Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.311193 4907 scope.go:117] "RemoveContainer" containerID="74d63fb45a9534d72aef7e31bb8862ecfcd262e095d5005a3061558d89bdb0e8" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.312097 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-m9rr7" event={"ID":"a5ce2510-00de-4a5b-8d9d-578b21229c8c","Type":"ContainerStarted","Data":"78c8083b39a7e04db9a82458faba7988d1a9a9c438bac13993341986efc58151"} Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.315294 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07d384d2-43f4-4290-837f-fb784fc28b37","Type":"ContainerStarted","Data":"fc8f0cf918d2f1e4ee6543675a8a734f0eae6d553ac279fc26c4b3464bce8f68"} Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.338089 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" podStartSLOduration=4.338073419 podStartE2EDuration="4.338073419s" podCreationTimestamp="2026-01-27 18:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:26:38.332854089 +0000 UTC m=+1253.462136721" watchObservedRunningTime="2026-01-27 18:26:38.338073419 +0000 UTC m=+1253.467356021" Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.435037 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-9g4ks"] Jan 27 18:26:38 crc kubenswrapper[4907]: I0127 18:26:38.445359 4907 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-9g4ks"] Jan 27 18:26:39 crc kubenswrapper[4907]: I0127 18:26:39.052001 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:39 crc kubenswrapper[4907]: I0127 18:26:39.052267 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:39 crc kubenswrapper[4907]: I0127 18:26:39.101694 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:39 crc kubenswrapper[4907]: I0127 18:26:39.370665 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 27 18:26:39 crc kubenswrapper[4907]: I0127 18:26:39.457518 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:39 crc kubenswrapper[4907]: E0127 18:26:39.468047 4907 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.184:45994->38.102.83.184:45697: write tcp 38.102.83.184:45994->38.102.83.184:45697: write: broken pipe Jan 27 18:26:39 crc kubenswrapper[4907]: I0127 18:26:39.509583 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:39 crc kubenswrapper[4907]: I0127 18:26:39.762848 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebc9482a-0b0e-48d7-8409-83be16d41469" path="/var/lib/kubelet/pods/ebc9482a-0b0e-48d7-8409-83be16d41469/volumes" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.094052 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:40 crc 
kubenswrapper[4907]: E0127 18:26:40.094242 4907 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 18:26:40 crc kubenswrapper[4907]: E0127 18:26:40.094479 4907 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 18:26:40 crc kubenswrapper[4907]: E0127 18:26:40.094525 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift podName:df7e986b-1dca-4795-85f7-e62cdd92d995 nodeName:}" failed. No retries permitted until 2026-01-27 18:26:44.094509487 +0000 UTC m=+1259.223792099 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift") pod "swift-storage-0" (UID: "df7e986b-1dca-4795-85f7-e62cdd92d995") : configmap "swift-ring-files" not found Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.390508 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.656319 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 27 18:26:40 crc kubenswrapper[4907]: E0127 18:26:40.657011 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc9482a-0b0e-48d7-8409-83be16d41469" containerName="init" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.657081 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc9482a-0b0e-48d7-8409-83be16d41469" containerName="init" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.657339 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebc9482a-0b0e-48d7-8409-83be16d41469" containerName="init" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.658836 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.660881 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.661316 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.661545 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.662374 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-8tg87" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.689161 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.814338 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c4f5ec64-0863-45ef-9090-4768ecd34667-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.814409 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk2pq\" (UniqueName: \"kubernetes.io/projected/c4f5ec64-0863-45ef-9090-4768ecd34667-kube-api-access-xk2pq\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.814531 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4f5ec64-0863-45ef-9090-4768ecd34667-scripts\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" 
Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.814664 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f5ec64-0863-45ef-9090-4768ecd34667-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.814704 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4f5ec64-0863-45ef-9090-4768ecd34667-config\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.814998 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f5ec64-0863-45ef-9090-4768ecd34667-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.815065 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f5ec64-0863-45ef-9090-4768ecd34667-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.916748 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f5ec64-0863-45ef-9090-4768ecd34667-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.916809 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4f5ec64-0863-45ef-9090-4768ecd34667-config\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.916875 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f5ec64-0863-45ef-9090-4768ecd34667-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.916910 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f5ec64-0863-45ef-9090-4768ecd34667-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.916947 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c4f5ec64-0863-45ef-9090-4768ecd34667-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.916969 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk2pq\" (UniqueName: \"kubernetes.io/projected/c4f5ec64-0863-45ef-9090-4768ecd34667-kube-api-access-xk2pq\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.917054 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4f5ec64-0863-45ef-9090-4768ecd34667-scripts\") pod \"ovn-northd-0\" (UID: 
\"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.918116 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4f5ec64-0863-45ef-9090-4768ecd34667-scripts\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.918235 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c4f5ec64-0863-45ef-9090-4768ecd34667-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.918285 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4f5ec64-0863-45ef-9090-4768ecd34667-config\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.923850 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f5ec64-0863-45ef-9090-4768ecd34667-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.924330 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f5ec64-0863-45ef-9090-4768ecd34667-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.925399 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c4f5ec64-0863-45ef-9090-4768ecd34667-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.936718 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk2pq\" (UniqueName: \"kubernetes.io/projected/c4f5ec64-0863-45ef-9090-4768ecd34667-kube-api-access-xk2pq\") pod \"ovn-northd-0\" (UID: \"c4f5ec64-0863-45ef-9090-4768ecd34667\") " pod="openstack/ovn-northd-0" Jan 27 18:26:40 crc kubenswrapper[4907]: I0127 18:26:40.981520 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 18:26:41 crc kubenswrapper[4907]: I0127 18:26:41.178142 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 27 18:26:41 crc kubenswrapper[4907]: I0127 18:26:41.178195 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 27 18:26:41 crc kubenswrapper[4907]: I0127 18:26:41.354916 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07d384d2-43f4-4290-837f-fb784fc28b37","Type":"ContainerStarted","Data":"0cb2b5617ca146cb8c79400c4a0bcad1760b35f1aee6925b3051c57fab16559e"} Jan 27 18:26:41 crc kubenswrapper[4907]: I0127 18:26:41.987545 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 18:26:41 crc kubenswrapper[4907]: W0127 18:26:41.989519 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4f5ec64_0863_45ef_9090_4768ecd34667.slice/crio-b4be161cbd3e6a9d199777f2bba81175845de9cb09d99b373a31e79528fccf9d WatchSource:0}: Error finding container b4be161cbd3e6a9d199777f2bba81175845de9cb09d99b373a31e79528fccf9d: Status 404 returned error can't find the container with id 
b4be161cbd3e6a9d199777f2bba81175845de9cb09d99b373a31e79528fccf9d Jan 27 18:26:42 crc kubenswrapper[4907]: I0127 18:26:42.366174 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-m9rr7" event={"ID":"a5ce2510-00de-4a5b-8d9d-578b21229c8c","Type":"ContainerStarted","Data":"ca86b15571e895f81aef7824a3bc0577e2b5583c21fc0cc8937be053cc06092b"} Jan 27 18:26:42 crc kubenswrapper[4907]: I0127 18:26:42.367461 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c4f5ec64-0863-45ef-9090-4768ecd34667","Type":"ContainerStarted","Data":"b4be161cbd3e6a9d199777f2bba81175845de9cb09d99b373a31e79528fccf9d"} Jan 27 18:26:42 crc kubenswrapper[4907]: I0127 18:26:42.389750 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-m9rr7" podStartSLOduration=2.7740508139999998 podStartE2EDuration="6.389731587s" podCreationTimestamp="2026-01-27 18:26:36 +0000 UTC" firstStartedPulling="2026-01-27 18:26:37.978015362 +0000 UTC m=+1253.107297974" lastFinishedPulling="2026-01-27 18:26:41.593696125 +0000 UTC m=+1256.722978747" observedRunningTime="2026-01-27 18:26:42.38950981 +0000 UTC m=+1257.518792433" watchObservedRunningTime="2026-01-27 18:26:42.389731587 +0000 UTC m=+1257.519014209" Jan 27 18:26:42 crc kubenswrapper[4907]: I0127 18:26:42.654313 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:42 crc kubenswrapper[4907]: I0127 18:26:42.654709 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:42 crc kubenswrapper[4907]: I0127 18:26:42.791459 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:43 crc kubenswrapper[4907]: I0127 18:26:43.471508 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/openstack-cell1-galera-0" Jan 27 18:26:43 crc kubenswrapper[4907]: I0127 18:26:43.771448 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 27 18:26:43 crc kubenswrapper[4907]: I0127 18:26:43.852413 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.195389 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:44 crc kubenswrapper[4907]: E0127 18:26:44.195546 4907 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 18:26:44 crc kubenswrapper[4907]: E0127 18:26:44.195563 4907 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 18:26:44 crc kubenswrapper[4907]: E0127 18:26:44.195635 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift podName:df7e986b-1dca-4795-85f7-e62cdd92d995 nodeName:}" failed. No retries permitted until 2026-01-27 18:26:52.195596932 +0000 UTC m=+1267.324879544 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift") pod "swift-storage-0" (UID: "df7e986b-1dca-4795-85f7-e62cdd92d995") : configmap "swift-ring-files" not found Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.389323 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c4f5ec64-0863-45ef-9090-4768ecd34667","Type":"ContainerStarted","Data":"f956f9ac8e59b89fd0166993283fa50a28c2a71b225081f9bd124cf61f3864e4"} Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.389395 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c4f5ec64-0863-45ef-9090-4768ecd34667","Type":"ContainerStarted","Data":"2200498760d91a27012199a92a489237ade2905e1a6c76991c1e31ca7468d9f8"} Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.389459 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.421923 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.24821637 podStartE2EDuration="4.421905063s" podCreationTimestamp="2026-01-27 18:26:40 +0000 UTC" firstStartedPulling="2026-01-27 18:26:41.992960777 +0000 UTC m=+1257.122243389" lastFinishedPulling="2026-01-27 18:26:43.16664946 +0000 UTC m=+1258.295932082" observedRunningTime="2026-01-27 18:26:44.40993122 +0000 UTC m=+1259.539213842" watchObservedRunningTime="2026-01-27 18:26:44.421905063 +0000 UTC m=+1259.551187675" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.656975 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vqsnx"] Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.658361 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.665861 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vqsnx"] Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.705147 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-vqsnx\" (UID: \"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5\") " pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.705226 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7m6t\" (UniqueName: \"kubernetes.io/projected/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-kube-api-access-j7m6t\") pod \"mysqld-exporter-openstack-db-create-vqsnx\" (UID: \"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5\") " pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.777702 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-1e4c-account-create-update-9hkjc"] Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.784539 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.791109 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.808058 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7m6t\" (UniqueName: \"kubernetes.io/projected/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-kube-api-access-j7m6t\") pod \"mysqld-exporter-openstack-db-create-vqsnx\" (UID: \"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5\") " pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.808630 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-vqsnx\" (UID: \"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5\") " pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.810893 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-1e4c-account-create-update-9hkjc"] Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.811414 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-vqsnx\" (UID: \"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5\") " pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.841965 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7m6t\" (UniqueName: \"kubernetes.io/projected/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-kube-api-access-j7m6t\") pod 
\"mysqld-exporter-openstack-db-create-vqsnx\" (UID: \"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5\") " pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.910814 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5edef5c0-5919-4ddd-93cd-65b569c78603-operator-scripts\") pod \"mysqld-exporter-1e4c-account-create-update-9hkjc\" (UID: \"5edef5c0-5919-4ddd-93cd-65b569c78603\") " pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.911047 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmz2c\" (UniqueName: \"kubernetes.io/projected/5edef5c0-5919-4ddd-93cd-65b569c78603-kube-api-access-pmz2c\") pod \"mysqld-exporter-1e4c-account-create-update-9hkjc\" (UID: \"5edef5c0-5919-4ddd-93cd-65b569c78603\") " pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" Jan 27 18:26:44 crc kubenswrapper[4907]: I0127 18:26:44.977681 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.014483 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmz2c\" (UniqueName: \"kubernetes.io/projected/5edef5c0-5919-4ddd-93cd-65b569c78603-kube-api-access-pmz2c\") pod \"mysqld-exporter-1e4c-account-create-update-9hkjc\" (UID: \"5edef5c0-5919-4ddd-93cd-65b569c78603\") " pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.014950 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5edef5c0-5919-4ddd-93cd-65b569c78603-operator-scripts\") pod \"mysqld-exporter-1e4c-account-create-update-9hkjc\" (UID: \"5edef5c0-5919-4ddd-93cd-65b569c78603\") " pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.016146 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5edef5c0-5919-4ddd-93cd-65b569c78603-operator-scripts\") pod \"mysqld-exporter-1e4c-account-create-update-9hkjc\" (UID: \"5edef5c0-5919-4ddd-93cd-65b569c78603\") " pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.035019 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmz2c\" (UniqueName: \"kubernetes.io/projected/5edef5c0-5919-4ddd-93cd-65b569c78603-kube-api-access-pmz2c\") pod \"mysqld-exporter-1e4c-account-create-update-9hkjc\" (UID: \"5edef5c0-5919-4ddd-93cd-65b569c78603\") " pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.066711 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" 
Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.112075 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.127622 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jfqlq"] Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.127877 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" podUID="bfcec505-2d02-4a43-ae48-0861df2f3f03" containerName="dnsmasq-dns" containerID="cri-o://d50ca368775142fa8baed9f94fc1a073a58fb7981b3dffe257a3949f6a0afda5" gracePeriod=10 Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.432039 4907 generic.go:334] "Generic (PLEG): container finished" podID="bfcec505-2d02-4a43-ae48-0861df2f3f03" containerID="d50ca368775142fa8baed9f94fc1a073a58fb7981b3dffe257a3949f6a0afda5" exitCode=0 Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.432130 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" event={"ID":"bfcec505-2d02-4a43-ae48-0861df2f3f03","Type":"ContainerDied","Data":"d50ca368775142fa8baed9f94fc1a073a58fb7981b3dffe257a3949f6a0afda5"} Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.808394 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vqsnx"] Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.882775 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-1e4c-account-create-update-9hkjc"] Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.886773 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Jan 27 18:26:45 crc kubenswrapper[4907]: I0127 18:26:45.969145 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.046911 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52896\" (UniqueName: \"kubernetes.io/projected/bfcec505-2d02-4a43-ae48-0861df2f3f03-kube-api-access-52896\") pod \"bfcec505-2d02-4a43-ae48-0861df2f3f03\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.047031 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-config\") pod \"bfcec505-2d02-4a43-ae48-0861df2f3f03\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.047077 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-dns-svc\") pod \"bfcec505-2d02-4a43-ae48-0861df2f3f03\" (UID: \"bfcec505-2d02-4a43-ae48-0861df2f3f03\") " Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.053888 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfcec505-2d02-4a43-ae48-0861df2f3f03-kube-api-access-52896" (OuterVolumeSpecName: "kube-api-access-52896") pod "bfcec505-2d02-4a43-ae48-0861df2f3f03" (UID: "bfcec505-2d02-4a43-ae48-0861df2f3f03"). InnerVolumeSpecName "kube-api-access-52896". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.100609 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bfcec505-2d02-4a43-ae48-0861df2f3f03" (UID: "bfcec505-2d02-4a43-ae48-0861df2f3f03"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.118081 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-config" (OuterVolumeSpecName: "config") pod "bfcec505-2d02-4a43-ae48-0861df2f3f03" (UID: "bfcec505-2d02-4a43-ae48-0861df2f3f03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.149389 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52896\" (UniqueName: \"kubernetes.io/projected/bfcec505-2d02-4a43-ae48-0861df2f3f03-kube-api-access-52896\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.149422 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.149432 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfcec505-2d02-4a43-ae48-0861df2f3f03-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.440530 4907 generic.go:334] "Generic (PLEG): container finished" podID="ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5" containerID="4d91c25a7314aab9b3fd8d4f969c9d2c94f6673a332760843f4352aa203efe16" exitCode=0 Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.440614 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" event={"ID":"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5","Type":"ContainerDied","Data":"4d91c25a7314aab9b3fd8d4f969c9d2c94f6673a332760843f4352aa203efe16"} Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.440640 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" event={"ID":"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5","Type":"ContainerStarted","Data":"e510fc68341dc2fc028c26d1ed20355e9d9b8ea566b8dca71ca217457c0123ea"} Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.445039 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" event={"ID":"bfcec505-2d02-4a43-ae48-0861df2f3f03","Type":"ContainerDied","Data":"6f8372a96157a4b3bb9b594a1bb14b4dea21ae1a28e8793346ac6d1505a183aa"} Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.445085 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jfqlq" Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.445130 4907 scope.go:117] "RemoveContainer" containerID="d50ca368775142fa8baed9f94fc1a073a58fb7981b3dffe257a3949f6a0afda5" Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.449080 4907 generic.go:334] "Generic (PLEG): container finished" podID="5edef5c0-5919-4ddd-93cd-65b569c78603" containerID="44d85d18431154ddbd383c884bcbc74eacef1eade71d6866721522e05fe32ba7" exitCode=0 Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.449130 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" event={"ID":"5edef5c0-5919-4ddd-93cd-65b569c78603","Type":"ContainerDied","Data":"44d85d18431154ddbd383c884bcbc74eacef1eade71d6866721522e05fe32ba7"} Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.449167 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" event={"ID":"5edef5c0-5919-4ddd-93cd-65b569c78603","Type":"ContainerStarted","Data":"fcedb1b997edccc07f58b982468b3cad1417f9094c0aeed4a860a6bb871a9579"} Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.480132 4907 scope.go:117] "RemoveContainer" containerID="912990bf5531c8d8dcf347a7806f2a0e43907aeff0a89bc9002b931b3fde59bf" Jan 27 
18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.511967 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jfqlq"] Jan 27 18:26:46 crc kubenswrapper[4907]: I0127 18:26:46.522157 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jfqlq"] Jan 27 18:26:47 crc kubenswrapper[4907]: I0127 18:26:47.460498 4907 generic.go:334] "Generic (PLEG): container finished" podID="07d384d2-43f4-4290-837f-fb784fc28b37" containerID="0cb2b5617ca146cb8c79400c4a0bcad1760b35f1aee6925b3051c57fab16559e" exitCode=0 Jan 27 18:26:47 crc kubenswrapper[4907]: I0127 18:26:47.460574 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07d384d2-43f4-4290-837f-fb784fc28b37","Type":"ContainerDied","Data":"0cb2b5617ca146cb8c79400c4a0bcad1760b35f1aee6925b3051c57fab16559e"} Jan 27 18:26:47 crc kubenswrapper[4907]: I0127 18:26:47.759764 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfcec505-2d02-4a43-ae48-0861df2f3f03" path="/var/lib/kubelet/pods/bfcec505-2d02-4a43-ae48-0861df2f3f03/volumes" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.092753 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.099492 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.101271 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmz2c\" (UniqueName: \"kubernetes.io/projected/5edef5c0-5919-4ddd-93cd-65b569c78603-kube-api-access-pmz2c\") pod \"5edef5c0-5919-4ddd-93cd-65b569c78603\" (UID: \"5edef5c0-5919-4ddd-93cd-65b569c78603\") " Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.101394 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5edef5c0-5919-4ddd-93cd-65b569c78603-operator-scripts\") pod \"5edef5c0-5919-4ddd-93cd-65b569c78603\" (UID: \"5edef5c0-5919-4ddd-93cd-65b569c78603\") " Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.102519 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5edef5c0-5919-4ddd-93cd-65b569c78603-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5edef5c0-5919-4ddd-93cd-65b569c78603" (UID: "5edef5c0-5919-4ddd-93cd-65b569c78603"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.103885 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5edef5c0-5919-4ddd-93cd-65b569c78603-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.109539 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5edef5c0-5919-4ddd-93cd-65b569c78603-kube-api-access-pmz2c" (OuterVolumeSpecName: "kube-api-access-pmz2c") pod "5edef5c0-5919-4ddd-93cd-65b569c78603" (UID: "5edef5c0-5919-4ddd-93cd-65b569c78603"). InnerVolumeSpecName "kube-api-access-pmz2c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.205092 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7m6t\" (UniqueName: \"kubernetes.io/projected/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-kube-api-access-j7m6t\") pod \"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5\" (UID: \"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5\") " Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.205362 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-operator-scripts\") pod \"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5\" (UID: \"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5\") " Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.205867 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmz2c\" (UniqueName: \"kubernetes.io/projected/5edef5c0-5919-4ddd-93cd-65b569c78603-kube-api-access-pmz2c\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.206016 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5" (UID: "ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.208849 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-kube-api-access-j7m6t" (OuterVolumeSpecName: "kube-api-access-j7m6t") pod "ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5" (UID: "ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5"). InnerVolumeSpecName "kube-api-access-j7m6t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.307813 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.307846 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7m6t\" (UniqueName: \"kubernetes.io/projected/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5-kube-api-access-j7m6t\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.471286 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" event={"ID":"ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5","Type":"ContainerDied","Data":"e510fc68341dc2fc028c26d1ed20355e9d9b8ea566b8dca71ca217457c0123ea"} Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.472789 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e510fc68341dc2fc028c26d1ed20355e9d9b8ea566b8dca71ca217457c0123ea" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.471351 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vqsnx" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.473253 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" event={"ID":"5edef5c0-5919-4ddd-93cd-65b569c78603","Type":"ContainerDied","Data":"fcedb1b997edccc07f58b982468b3cad1417f9094c0aeed4a860a6bb871a9579"} Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.473287 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcedb1b997edccc07f58b982468b3cad1417f9094c0aeed4a860a6bb871a9579" Jan 27 18:26:48 crc kubenswrapper[4907]: I0127 18:26:48.473291 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-1e4c-account-create-update-9hkjc" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.484330 4907 generic.go:334] "Generic (PLEG): container finished" podID="a5ce2510-00de-4a5b-8d9d-578b21229c8c" containerID="ca86b15571e895f81aef7824a3bc0577e2b5583c21fc0cc8937be053cc06092b" exitCode=0 Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.484433 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-m9rr7" event={"ID":"a5ce2510-00de-4a5b-8d9d-578b21229c8c","Type":"ContainerDied","Data":"ca86b15571e895f81aef7824a3bc0577e2b5583c21fc0cc8937be053cc06092b"} Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.903820 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-pk78t"] Jan 27 18:26:49 crc kubenswrapper[4907]: E0127 18:26:49.904674 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5edef5c0-5919-4ddd-93cd-65b569c78603" containerName="mariadb-account-create-update" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.904696 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="5edef5c0-5919-4ddd-93cd-65b569c78603" containerName="mariadb-account-create-update" 
Jan 27 18:26:49 crc kubenswrapper[4907]: E0127 18:26:49.904712 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfcec505-2d02-4a43-ae48-0861df2f3f03" containerName="dnsmasq-dns" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.904720 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfcec505-2d02-4a43-ae48-0861df2f3f03" containerName="dnsmasq-dns" Jan 27 18:26:49 crc kubenswrapper[4907]: E0127 18:26:49.904736 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfcec505-2d02-4a43-ae48-0861df2f3f03" containerName="init" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.904742 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfcec505-2d02-4a43-ae48-0861df2f3f03" containerName="init" Jan 27 18:26:49 crc kubenswrapper[4907]: E0127 18:26:49.904777 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5" containerName="mariadb-database-create" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.904782 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5" containerName="mariadb-database-create" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.904962 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfcec505-2d02-4a43-ae48-0861df2f3f03" containerName="dnsmasq-dns" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.905009 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="5edef5c0-5919-4ddd-93cd-65b569c78603" containerName="mariadb-account-create-update" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.905018 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5" containerName="mariadb-database-create" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.905772 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pk78t" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.911802 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.926026 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pk78t"] Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.961346 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppbs2\" (UniqueName: \"kubernetes.io/projected/6595f747-432a-4afe-ad8c-fd3f44fa85e6-kube-api-access-ppbs2\") pod \"root-account-create-update-pk78t\" (UID: \"6595f747-432a-4afe-ad8c-fd3f44fa85e6\") " pod="openstack/root-account-create-update-pk78t" Jan 27 18:26:49 crc kubenswrapper[4907]: I0127 18:26:49.961479 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6595f747-432a-4afe-ad8c-fd3f44fa85e6-operator-scripts\") pod \"root-account-create-update-pk78t\" (UID: \"6595f747-432a-4afe-ad8c-fd3f44fa85e6\") " pod="openstack/root-account-create-update-pk78t" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.063394 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppbs2\" (UniqueName: \"kubernetes.io/projected/6595f747-432a-4afe-ad8c-fd3f44fa85e6-kube-api-access-ppbs2\") pod \"root-account-create-update-pk78t\" (UID: \"6595f747-432a-4afe-ad8c-fd3f44fa85e6\") " pod="openstack/root-account-create-update-pk78t" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.063513 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6595f747-432a-4afe-ad8c-fd3f44fa85e6-operator-scripts\") pod \"root-account-create-update-pk78t\" (UID: 
\"6595f747-432a-4afe-ad8c-fd3f44fa85e6\") " pod="openstack/root-account-create-update-pk78t" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.064415 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6595f747-432a-4afe-ad8c-fd3f44fa85e6-operator-scripts\") pod \"root-account-create-update-pk78t\" (UID: \"6595f747-432a-4afe-ad8c-fd3f44fa85e6\") " pod="openstack/root-account-create-update-pk78t" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.091214 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppbs2\" (UniqueName: \"kubernetes.io/projected/6595f747-432a-4afe-ad8c-fd3f44fa85e6-kube-api-access-ppbs2\") pod \"root-account-create-update-pk78t\" (UID: \"6595f747-432a-4afe-ad8c-fd3f44fa85e6\") " pod="openstack/root-account-create-update-pk78t" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.176870 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-kpsck"] Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.178752 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.189440 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-kpsck"] Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.227327 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pk78t" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.267352 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqtnm\" (UniqueName: \"kubernetes.io/projected/a0adeee4-a225-49f2-8a87-f44aa772d5f2-kube-api-access-wqtnm\") pod \"mysqld-exporter-openstack-cell1-db-create-kpsck\" (UID: \"a0adeee4-a225-49f2-8a87-f44aa772d5f2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.267493 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0adeee4-a225-49f2-8a87-f44aa772d5f2-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-kpsck\" (UID: \"a0adeee4-a225-49f2-8a87-f44aa772d5f2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.290485 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-69b7-account-create-update-6pfhq"] Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.292500 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.296818 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.301385 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-69b7-account-create-update-6pfhq"] Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.374524 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnz9z\" (UniqueName: \"kubernetes.io/projected/e1662136-4082-412a-9846-92ea9aff9350-kube-api-access-dnz9z\") pod \"mysqld-exporter-69b7-account-create-update-6pfhq\" (UID: \"e1662136-4082-412a-9846-92ea9aff9350\") " pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.374712 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqtnm\" (UniqueName: \"kubernetes.io/projected/a0adeee4-a225-49f2-8a87-f44aa772d5f2-kube-api-access-wqtnm\") pod \"mysqld-exporter-openstack-cell1-db-create-kpsck\" (UID: \"a0adeee4-a225-49f2-8a87-f44aa772d5f2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.374776 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1662136-4082-412a-9846-92ea9aff9350-operator-scripts\") pod \"mysqld-exporter-69b7-account-create-update-6pfhq\" (UID: \"e1662136-4082-412a-9846-92ea9aff9350\") " pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.374884 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a0adeee4-a225-49f2-8a87-f44aa772d5f2-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-kpsck\" (UID: \"a0adeee4-a225-49f2-8a87-f44aa772d5f2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.375741 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0adeee4-a225-49f2-8a87-f44aa772d5f2-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-kpsck\" (UID: \"a0adeee4-a225-49f2-8a87-f44aa772d5f2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.395536 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqtnm\" (UniqueName: \"kubernetes.io/projected/a0adeee4-a225-49f2-8a87-f44aa772d5f2-kube-api-access-wqtnm\") pod \"mysqld-exporter-openstack-cell1-db-create-kpsck\" (UID: \"a0adeee4-a225-49f2-8a87-f44aa772d5f2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.477426 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnz9z\" (UniqueName: \"kubernetes.io/projected/e1662136-4082-412a-9846-92ea9aff9350-kube-api-access-dnz9z\") pod \"mysqld-exporter-69b7-account-create-update-6pfhq\" (UID: \"e1662136-4082-412a-9846-92ea9aff9350\") " pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.477561 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1662136-4082-412a-9846-92ea9aff9350-operator-scripts\") pod \"mysqld-exporter-69b7-account-create-update-6pfhq\" (UID: \"e1662136-4082-412a-9846-92ea9aff9350\") " pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" Jan 27 
18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.478437 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1662136-4082-412a-9846-92ea9aff9350-operator-scripts\") pod \"mysqld-exporter-69b7-account-create-update-6pfhq\" (UID: \"e1662136-4082-412a-9846-92ea9aff9350\") " pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.501725 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.507373 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnz9z\" (UniqueName: \"kubernetes.io/projected/e1662136-4082-412a-9846-92ea9aff9350-kube-api-access-dnz9z\") pod \"mysqld-exporter-69b7-account-create-update-6pfhq\" (UID: \"e1662136-4082-412a-9846-92ea9aff9350\") " pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.675031 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.733771 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pk78t"] Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.909728 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.990667 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-swiftconf\") pod \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.990773 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-ring-data-devices\") pod \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.990874 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a5ce2510-00de-4a5b-8d9d-578b21229c8c-etc-swift\") pod \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.991017 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfpt5\" (UniqueName: \"kubernetes.io/projected/a5ce2510-00de-4a5b-8d9d-578b21229c8c-kube-api-access-qfpt5\") pod \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.991073 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-scripts\") pod \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.991174 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-dispersionconf\") pod \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.991321 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-combined-ca-bundle\") pod \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\" (UID: \"a5ce2510-00de-4a5b-8d9d-578b21229c8c\") " Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.992201 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a5ce2510-00de-4a5b-8d9d-578b21229c8c" (UID: "a5ce2510-00de-4a5b-8d9d-578b21229c8c"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.992835 4907 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.993304 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5ce2510-00de-4a5b-8d9d-578b21229c8c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a5ce2510-00de-4a5b-8d9d-578b21229c8c" (UID: "a5ce2510-00de-4a5b-8d9d-578b21229c8c"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:26:50 crc kubenswrapper[4907]: I0127 18:26:50.998044 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ce2510-00de-4a5b-8d9d-578b21229c8c-kube-api-access-qfpt5" (OuterVolumeSpecName: "kube-api-access-qfpt5") pod "a5ce2510-00de-4a5b-8d9d-578b21229c8c" (UID: "a5ce2510-00de-4a5b-8d9d-578b21229c8c"). InnerVolumeSpecName "kube-api-access-qfpt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.017882 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a5ce2510-00de-4a5b-8d9d-578b21229c8c" (UID: "a5ce2510-00de-4a5b-8d9d-578b21229c8c"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.033327 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-scripts" (OuterVolumeSpecName: "scripts") pod "a5ce2510-00de-4a5b-8d9d-578b21229c8c" (UID: "a5ce2510-00de-4a5b-8d9d-578b21229c8c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.039501 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5ce2510-00de-4a5b-8d9d-578b21229c8c" (UID: "a5ce2510-00de-4a5b-8d9d-578b21229c8c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.040664 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a5ce2510-00de-4a5b-8d9d-578b21229c8c" (UID: "a5ce2510-00de-4a5b-8d9d-578b21229c8c"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:51 crc kubenswrapper[4907]: W0127 18:26:51.054927 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1662136_4082_412a_9846_92ea9aff9350.slice/crio-03e08ea5f5f486ed9f77e9e83df8019c530c00418aca2b4e66e8135742f090f7 WatchSource:0}: Error finding container 03e08ea5f5f486ed9f77e9e83df8019c530c00418aca2b4e66e8135742f090f7: Status 404 returned error can't find the container with id 03e08ea5f5f486ed9f77e9e83df8019c530c00418aca2b4e66e8135742f090f7 Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.060293 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-69b7-account-create-update-6pfhq"] Jan 27 18:26:51 crc kubenswrapper[4907]: W0127 18:26:51.071552 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0adeee4_a225_49f2_8a87_f44aa772d5f2.slice/crio-0bdd46c961fbd6bf28363b4c06a241cfc301504df37dd4470971e7c57a698892 WatchSource:0}: Error finding container 0bdd46c961fbd6bf28363b4c06a241cfc301504df37dd4470971e7c57a698892: Status 404 returned error can't find the container with id 0bdd46c961fbd6bf28363b4c06a241cfc301504df37dd4470971e7c57a698892 Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.072244 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-kpsck"] Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.094380 4907 
reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.094409 4907 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a5ce2510-00de-4a5b-8d9d-578b21229c8c-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.094419 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfpt5\" (UniqueName: \"kubernetes.io/projected/a5ce2510-00de-4a5b-8d9d-578b21229c8c-kube-api-access-qfpt5\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.094429 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5ce2510-00de-4a5b-8d9d-578b21229c8c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.094437 4907 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.094444 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ce2510-00de-4a5b-8d9d-578b21229c8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.501858 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-m9rr7" event={"ID":"a5ce2510-00de-4a5b-8d9d-578b21229c8c","Type":"ContainerDied","Data":"78c8083b39a7e04db9a82458faba7988d1a9a9c438bac13993341986efc58151"} Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.502245 4907 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="78c8083b39a7e04db9a82458faba7988d1a9a9c438bac13993341986efc58151" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.502308 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-m9rr7" Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.505603 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" event={"ID":"e1662136-4082-412a-9846-92ea9aff9350","Type":"ContainerStarted","Data":"e2fec9f980876bf8fc48b1230ddea98e34e541b132ba4a428836b64324d1589b"} Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.505642 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" event={"ID":"e1662136-4082-412a-9846-92ea9aff9350","Type":"ContainerStarted","Data":"03e08ea5f5f486ed9f77e9e83df8019c530c00418aca2b4e66e8135742f090f7"} Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.509968 4907 generic.go:334] "Generic (PLEG): container finished" podID="a0adeee4-a225-49f2-8a87-f44aa772d5f2" containerID="bbe32d131e2f18cc943ec38e3f64224872de067fbdcc4c36da535318442ade1c" exitCode=0 Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.510047 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" event={"ID":"a0adeee4-a225-49f2-8a87-f44aa772d5f2","Type":"ContainerDied","Data":"bbe32d131e2f18cc943ec38e3f64224872de067fbdcc4c36da535318442ade1c"} Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.510074 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" event={"ID":"a0adeee4-a225-49f2-8a87-f44aa772d5f2","Type":"ContainerStarted","Data":"0bdd46c961fbd6bf28363b4c06a241cfc301504df37dd4470971e7c57a698892"} Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.512511 4907 generic.go:334] "Generic (PLEG): container finished" 
podID="6595f747-432a-4afe-ad8c-fd3f44fa85e6" containerID="3bbc9b483b2ac3711ce029100cb12ceb3f91e479b6591235f5a6fbedf804d371" exitCode=0 Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.512548 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pk78t" event={"ID":"6595f747-432a-4afe-ad8c-fd3f44fa85e6","Type":"ContainerDied","Data":"3bbc9b483b2ac3711ce029100cb12ceb3f91e479b6591235f5a6fbedf804d371"} Jan 27 18:26:51 crc kubenswrapper[4907]: I0127 18:26:51.512618 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pk78t" event={"ID":"6595f747-432a-4afe-ad8c-fd3f44fa85e6","Type":"ContainerStarted","Data":"742f1640d1937fd1cdb36c1320b8e2c7bc39f79ae110bf29cfd954a805839233"} Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.222273 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.243493 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7e986b-1dca-4795-85f7-e62cdd92d995-etc-swift\") pod \"swift-storage-0\" (UID: \"df7e986b-1dca-4795-85f7-e62cdd92d995\") " pod="openstack/swift-storage-0" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.244070 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.525954 4907 generic.go:334] "Generic (PLEG): container finished" podID="e1662136-4082-412a-9846-92ea9aff9350" containerID="e2fec9f980876bf8fc48b1230ddea98e34e541b132ba4a428836b64324d1589b" exitCode=0 Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.526021 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" event={"ID":"e1662136-4082-412a-9846-92ea9aff9350","Type":"ContainerDied","Data":"e2fec9f980876bf8fc48b1230ddea98e34e541b132ba4a428836b64324d1589b"} Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.665367 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-qdj7p"] Jan 27 18:26:52 crc kubenswrapper[4907]: E0127 18:26:52.665855 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ce2510-00de-4a5b-8d9d-578b21229c8c" containerName="swift-ring-rebalance" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.665866 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ce2510-00de-4a5b-8d9d-578b21229c8c" containerName="swift-ring-rebalance" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.666052 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ce2510-00de-4a5b-8d9d-578b21229c8c" containerName="swift-ring-rebalance" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.666722 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-qdj7p" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.679828 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qdj7p"] Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.732894 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1904c81-5de8-431a-9304-5b4ba1771c73-operator-scripts\") pod \"keystone-db-create-qdj7p\" (UID: \"f1904c81-5de8-431a-9304-5b4ba1771c73\") " pod="openstack/keystone-db-create-qdj7p" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.732936 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh2rl\" (UniqueName: \"kubernetes.io/projected/f1904c81-5de8-431a-9304-5b4ba1771c73-kube-api-access-zh2rl\") pod \"keystone-db-create-qdj7p\" (UID: \"f1904c81-5de8-431a-9304-5b4ba1771c73\") " pod="openstack/keystone-db-create-qdj7p" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.779898 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-214c-account-create-update-5x6dm"] Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.781112 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-214c-account-create-update-5x6dm" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.783658 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.798648 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-214c-account-create-update-5x6dm"] Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.834613 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfbf931-f21b-4652-8640-0208df4b40cc-operator-scripts\") pod \"keystone-214c-account-create-update-5x6dm\" (UID: \"3dfbf931-f21b-4652-8640-0208df4b40cc\") " pod="openstack/keystone-214c-account-create-update-5x6dm" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.834784 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1904c81-5de8-431a-9304-5b4ba1771c73-operator-scripts\") pod \"keystone-db-create-qdj7p\" (UID: \"f1904c81-5de8-431a-9304-5b4ba1771c73\") " pod="openstack/keystone-db-create-qdj7p" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.834822 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh2rl\" (UniqueName: \"kubernetes.io/projected/f1904c81-5de8-431a-9304-5b4ba1771c73-kube-api-access-zh2rl\") pod \"keystone-db-create-qdj7p\" (UID: \"f1904c81-5de8-431a-9304-5b4ba1771c73\") " pod="openstack/keystone-db-create-qdj7p" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.834934 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7kjr\" (UniqueName: \"kubernetes.io/projected/3dfbf931-f21b-4652-8640-0208df4b40cc-kube-api-access-n7kjr\") pod \"keystone-214c-account-create-update-5x6dm\" (UID: 
\"3dfbf931-f21b-4652-8640-0208df4b40cc\") " pod="openstack/keystone-214c-account-create-update-5x6dm" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.836338 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1904c81-5de8-431a-9304-5b4ba1771c73-operator-scripts\") pod \"keystone-db-create-qdj7p\" (UID: \"f1904c81-5de8-431a-9304-5b4ba1771c73\") " pod="openstack/keystone-db-create-qdj7p" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.858205 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh2rl\" (UniqueName: \"kubernetes.io/projected/f1904c81-5de8-431a-9304-5b4ba1771c73-kube-api-access-zh2rl\") pod \"keystone-db-create-qdj7p\" (UID: \"f1904c81-5de8-431a-9304-5b4ba1771c73\") " pod="openstack/keystone-db-create-qdj7p" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.938596 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7kjr\" (UniqueName: \"kubernetes.io/projected/3dfbf931-f21b-4652-8640-0208df4b40cc-kube-api-access-n7kjr\") pod \"keystone-214c-account-create-update-5x6dm\" (UID: \"3dfbf931-f21b-4652-8640-0208df4b40cc\") " pod="openstack/keystone-214c-account-create-update-5x6dm" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.938707 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfbf931-f21b-4652-8640-0208df4b40cc-operator-scripts\") pod \"keystone-214c-account-create-update-5x6dm\" (UID: \"3dfbf931-f21b-4652-8640-0208df4b40cc\") " pod="openstack/keystone-214c-account-create-update-5x6dm" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.942598 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfbf931-f21b-4652-8640-0208df4b40cc-operator-scripts\") pod 
\"keystone-214c-account-create-update-5x6dm\" (UID: \"3dfbf931-f21b-4652-8640-0208df4b40cc\") " pod="openstack/keystone-214c-account-create-update-5x6dm" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.956883 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7kjr\" (UniqueName: \"kubernetes.io/projected/3dfbf931-f21b-4652-8640-0208df4b40cc-kube-api-access-n7kjr\") pod \"keystone-214c-account-create-update-5x6dm\" (UID: \"3dfbf931-f21b-4652-8640-0208df4b40cc\") " pod="openstack/keystone-214c-account-create-update-5x6dm" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.988454 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-z8s67"] Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.988996 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qdj7p" Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.989401 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-65dccccccb-km74l" podUID="4e69063c-9ede-4474-9fd3-b16db60b9a7c" containerName="console" containerID="cri-o://73c9e83ba8aebe975e11ef4d07847ef42cb88307c5cad2d2f3f2c241d0b95d45" gracePeriod=14 Jan 27 18:26:52 crc kubenswrapper[4907]: I0127 18:26:52.990048 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-z8s67" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.001350 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-z8s67"] Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.044885 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdq4c\" (UniqueName: \"kubernetes.io/projected/94f0fdef-b14b-4204-be1e-90a5d19c96e7-kube-api-access-bdq4c\") pod \"placement-db-create-z8s67\" (UID: \"94f0fdef-b14b-4204-be1e-90a5d19c96e7\") " pod="openstack/placement-db-create-z8s67" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.045038 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94f0fdef-b14b-4204-be1e-90a5d19c96e7-operator-scripts\") pod \"placement-db-create-z8s67\" (UID: \"94f0fdef-b14b-4204-be1e-90a5d19c96e7\") " pod="openstack/placement-db-create-z8s67" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.074480 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c84c-account-create-update-4ld5d"] Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.077175 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c84c-account-create-update-4ld5d" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.079100 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.086637 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c84c-account-create-update-4ld5d"] Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.106491 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-214c-account-create-update-5x6dm" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.147364 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdq4c\" (UniqueName: \"kubernetes.io/projected/94f0fdef-b14b-4204-be1e-90a5d19c96e7-kube-api-access-bdq4c\") pod \"placement-db-create-z8s67\" (UID: \"94f0fdef-b14b-4204-be1e-90a5d19c96e7\") " pod="openstack/placement-db-create-z8s67" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.147453 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94f0fdef-b14b-4204-be1e-90a5d19c96e7-operator-scripts\") pod \"placement-db-create-z8s67\" (UID: \"94f0fdef-b14b-4204-be1e-90a5d19c96e7\") " pod="openstack/placement-db-create-z8s67" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.148861 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94f0fdef-b14b-4204-be1e-90a5d19c96e7-operator-scripts\") pod \"placement-db-create-z8s67\" (UID: \"94f0fdef-b14b-4204-be1e-90a5d19c96e7\") " pod="openstack/placement-db-create-z8s67" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.148947 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sh46\" (UniqueName: \"kubernetes.io/projected/0ef0a2ee-9212-41c9-b2b9-d59602779eef-kube-api-access-4sh46\") pod \"placement-c84c-account-create-update-4ld5d\" (UID: \"0ef0a2ee-9212-41c9-b2b9-d59602779eef\") " pod="openstack/placement-c84c-account-create-update-4ld5d" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.149081 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef0a2ee-9212-41c9-b2b9-d59602779eef-operator-scripts\") pod 
\"placement-c84c-account-create-update-4ld5d\" (UID: \"0ef0a2ee-9212-41c9-b2b9-d59602779eef\") " pod="openstack/placement-c84c-account-create-update-4ld5d" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.167080 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdq4c\" (UniqueName: \"kubernetes.io/projected/94f0fdef-b14b-4204-be1e-90a5d19c96e7-kube-api-access-bdq4c\") pod \"placement-db-create-z8s67\" (UID: \"94f0fdef-b14b-4204-be1e-90a5d19c96e7\") " pod="openstack/placement-db-create-z8s67" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.259120 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sh46\" (UniqueName: \"kubernetes.io/projected/0ef0a2ee-9212-41c9-b2b9-d59602779eef-kube-api-access-4sh46\") pod \"placement-c84c-account-create-update-4ld5d\" (UID: \"0ef0a2ee-9212-41c9-b2b9-d59602779eef\") " pod="openstack/placement-c84c-account-create-update-4ld5d" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.259557 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef0a2ee-9212-41c9-b2b9-d59602779eef-operator-scripts\") pod \"placement-c84c-account-create-update-4ld5d\" (UID: \"0ef0a2ee-9212-41c9-b2b9-d59602779eef\") " pod="openstack/placement-c84c-account-create-update-4ld5d" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.260450 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef0a2ee-9212-41c9-b2b9-d59602779eef-operator-scripts\") pod \"placement-c84c-account-create-update-4ld5d\" (UID: \"0ef0a2ee-9212-41c9-b2b9-d59602779eef\") " pod="openstack/placement-c84c-account-create-update-4ld5d" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.281624 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sh46\" (UniqueName: 
\"kubernetes.io/projected/0ef0a2ee-9212-41c9-b2b9-d59602779eef-kube-api-access-4sh46\") pod \"placement-c84c-account-create-update-4ld5d\" (UID: \"0ef0a2ee-9212-41c9-b2b9-d59602779eef\") " pod="openstack/placement-c84c-account-create-update-4ld5d" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.325549 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-z8s67" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.346801 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-9r669"] Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.348238 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-9r669" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.363489 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-9r669"] Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.409199 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c84c-account-create-update-4ld5d" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.444379 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-0abc-account-create-update-gwjft"] Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.445997 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0abc-account-create-update-gwjft" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.447962 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.457501 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0abc-account-create-update-gwjft"] Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.462383 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82plg\" (UniqueName: \"kubernetes.io/projected/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-kube-api-access-82plg\") pod \"glance-db-create-9r669\" (UID: \"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d\") " pod="openstack/glance-db-create-9r669" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.462471 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-operator-scripts\") pod \"glance-db-create-9r669\" (UID: \"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d\") " pod="openstack/glance-db-create-9r669" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.539061 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-65dccccccb-km74l_4e69063c-9ede-4474-9fd3-b16db60b9a7c/console/0.log" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.539733 4907 generic.go:334] "Generic (PLEG): container finished" podID="4e69063c-9ede-4474-9fd3-b16db60b9a7c" containerID="73c9e83ba8aebe975e11ef4d07847ef42cb88307c5cad2d2f3f2c241d0b95d45" exitCode=2 Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.539801 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65dccccccb-km74l" 
event={"ID":"4e69063c-9ede-4474-9fd3-b16db60b9a7c","Type":"ContainerDied","Data":"73c9e83ba8aebe975e11ef4d07847ef42cb88307c5cad2d2f3f2c241d0b95d45"} Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.564171 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2pt9\" (UniqueName: \"kubernetes.io/projected/d7319b76-e25b-4370-ac3e-641efd764024-kube-api-access-f2pt9\") pod \"glance-0abc-account-create-update-gwjft\" (UID: \"d7319b76-e25b-4370-ac3e-641efd764024\") " pod="openstack/glance-0abc-account-create-update-gwjft" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.564303 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82plg\" (UniqueName: \"kubernetes.io/projected/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-kube-api-access-82plg\") pod \"glance-db-create-9r669\" (UID: \"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d\") " pod="openstack/glance-db-create-9r669" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.564335 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7319b76-e25b-4370-ac3e-641efd764024-operator-scripts\") pod \"glance-0abc-account-create-update-gwjft\" (UID: \"d7319b76-e25b-4370-ac3e-641efd764024\") " pod="openstack/glance-0abc-account-create-update-gwjft" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.564380 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-operator-scripts\") pod \"glance-db-create-9r669\" (UID: \"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d\") " pod="openstack/glance-db-create-9r669" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.565615 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-operator-scripts\") pod \"glance-db-create-9r669\" (UID: \"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d\") " pod="openstack/glance-db-create-9r669" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.586590 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82plg\" (UniqueName: \"kubernetes.io/projected/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-kube-api-access-82plg\") pod \"glance-db-create-9r669\" (UID: \"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d\") " pod="openstack/glance-db-create-9r669" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.666399 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7319b76-e25b-4370-ac3e-641efd764024-operator-scripts\") pod \"glance-0abc-account-create-update-gwjft\" (UID: \"d7319b76-e25b-4370-ac3e-641efd764024\") " pod="openstack/glance-0abc-account-create-update-gwjft" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.666544 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2pt9\" (UniqueName: \"kubernetes.io/projected/d7319b76-e25b-4370-ac3e-641efd764024-kube-api-access-f2pt9\") pod \"glance-0abc-account-create-update-gwjft\" (UID: \"d7319b76-e25b-4370-ac3e-641efd764024\") " pod="openstack/glance-0abc-account-create-update-gwjft" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.667430 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7319b76-e25b-4370-ac3e-641efd764024-operator-scripts\") pod \"glance-0abc-account-create-update-gwjft\" (UID: \"d7319b76-e25b-4370-ac3e-641efd764024\") " pod="openstack/glance-0abc-account-create-update-gwjft" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.681930 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-9r669" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.682889 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2pt9\" (UniqueName: \"kubernetes.io/projected/d7319b76-e25b-4370-ac3e-641efd764024-kube-api-access-f2pt9\") pod \"glance-0abc-account-create-update-gwjft\" (UID: \"d7319b76-e25b-4370-ac3e-641efd764024\") " pod="openstack/glance-0abc-account-create-update-gwjft" Jan 27 18:26:53 crc kubenswrapper[4907]: I0127 18:26:53.769484 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0abc-account-create-update-gwjft" Jan 27 18:26:54 crc kubenswrapper[4907]: I0127 18:26:54.561966 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" event={"ID":"a0adeee4-a225-49f2-8a87-f44aa772d5f2","Type":"ContainerDied","Data":"0bdd46c961fbd6bf28363b4c06a241cfc301504df37dd4470971e7c57a698892"} Jan 27 18:26:54 crc kubenswrapper[4907]: I0127 18:26:54.562284 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bdd46c961fbd6bf28363b4c06a241cfc301504df37dd4470971e7c57a698892" Jan 27 18:26:54 crc kubenswrapper[4907]: I0127 18:26:54.565453 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pk78t" event={"ID":"6595f747-432a-4afe-ad8c-fd3f44fa85e6","Type":"ContainerDied","Data":"742f1640d1937fd1cdb36c1320b8e2c7bc39f79ae110bf29cfd954a805839233"} Jan 27 18:26:54 crc kubenswrapper[4907]: I0127 18:26:54.565531 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="742f1640d1937fd1cdb36c1320b8e2c7bc39f79ae110bf29cfd954a805839233" Jan 27 18:26:54 crc kubenswrapper[4907]: I0127 18:26:54.567133 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" 
event={"ID":"e1662136-4082-412a-9846-92ea9aff9350","Type":"ContainerDied","Data":"03e08ea5f5f486ed9f77e9e83df8019c530c00418aca2b4e66e8135742f090f7"} Jan 27 18:26:54 crc kubenswrapper[4907]: I0127 18:26:54.567158 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03e08ea5f5f486ed9f77e9e83df8019c530c00418aca2b4e66e8135742f090f7" Jan 27 18:26:54 crc kubenswrapper[4907]: I0127 18:26:54.630441 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.664828 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pk78t" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.675881 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.692139 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0adeee4-a225-49f2-8a87-f44aa772d5f2-operator-scripts\") pod \"a0adeee4-a225-49f2-8a87-f44aa772d5f2\" (UID: \"a0adeee4-a225-49f2-8a87-f44aa772d5f2\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.692231 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqtnm\" (UniqueName: \"kubernetes.io/projected/a0adeee4-a225-49f2-8a87-f44aa772d5f2-kube-api-access-wqtnm\") pod \"a0adeee4-a225-49f2-8a87-f44aa772d5f2\" (UID: \"a0adeee4-a225-49f2-8a87-f44aa772d5f2\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.693533 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0adeee4-a225-49f2-8a87-f44aa772d5f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"a0adeee4-a225-49f2-8a87-f44aa772d5f2" (UID: "a0adeee4-a225-49f2-8a87-f44aa772d5f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.703282 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0adeee4-a225-49f2-8a87-f44aa772d5f2-kube-api-access-wqtnm" (OuterVolumeSpecName: "kube-api-access-wqtnm") pod "a0adeee4-a225-49f2-8a87-f44aa772d5f2" (UID: "a0adeee4-a225-49f2-8a87-f44aa772d5f2"). InnerVolumeSpecName "kube-api-access-wqtnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.796679 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppbs2\" (UniqueName: \"kubernetes.io/projected/6595f747-432a-4afe-ad8c-fd3f44fa85e6-kube-api-access-ppbs2\") pod \"6595f747-432a-4afe-ad8c-fd3f44fa85e6\" (UID: \"6595f747-432a-4afe-ad8c-fd3f44fa85e6\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.796984 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6595f747-432a-4afe-ad8c-fd3f44fa85e6-operator-scripts\") pod \"6595f747-432a-4afe-ad8c-fd3f44fa85e6\" (UID: \"6595f747-432a-4afe-ad8c-fd3f44fa85e6\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.797003 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1662136-4082-412a-9846-92ea9aff9350-operator-scripts\") pod \"e1662136-4082-412a-9846-92ea9aff9350\" (UID: \"e1662136-4082-412a-9846-92ea9aff9350\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.797091 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnz9z\" (UniqueName: 
\"kubernetes.io/projected/e1662136-4082-412a-9846-92ea9aff9350-kube-api-access-dnz9z\") pod \"e1662136-4082-412a-9846-92ea9aff9350\" (UID: \"e1662136-4082-412a-9846-92ea9aff9350\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.797640 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0adeee4-a225-49f2-8a87-f44aa772d5f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.797668 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqtnm\" (UniqueName: \"kubernetes.io/projected/a0adeee4-a225-49f2-8a87-f44aa772d5f2-kube-api-access-wqtnm\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.798030 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1662136-4082-412a-9846-92ea9aff9350-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1662136-4082-412a-9846-92ea9aff9350" (UID: "e1662136-4082-412a-9846-92ea9aff9350"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.800933 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1662136-4082-412a-9846-92ea9aff9350-kube-api-access-dnz9z" (OuterVolumeSpecName: "kube-api-access-dnz9z") pod "e1662136-4082-412a-9846-92ea9aff9350" (UID: "e1662136-4082-412a-9846-92ea9aff9350"). InnerVolumeSpecName "kube-api-access-dnz9z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.805020 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6595f747-432a-4afe-ad8c-fd3f44fa85e6-kube-api-access-ppbs2" (OuterVolumeSpecName: "kube-api-access-ppbs2") pod "6595f747-432a-4afe-ad8c-fd3f44fa85e6" (UID: "6595f747-432a-4afe-ad8c-fd3f44fa85e6"). InnerVolumeSpecName "kube-api-access-ppbs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.805466 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6595f747-432a-4afe-ad8c-fd3f44fa85e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6595f747-432a-4afe-ad8c-fd3f44fa85e6" (UID: "6595f747-432a-4afe-ad8c-fd3f44fa85e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.826759 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-65dccccccb-km74l_4e69063c-9ede-4474-9fd3-b16db60b9a7c/console/0.log" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.826839 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.899420 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-serving-cert\") pod \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.899941 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bglhw\" (UniqueName: \"kubernetes.io/projected/4e69063c-9ede-4474-9fd3-b16db60b9a7c-kube-api-access-bglhw\") pod \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.900093 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-config\") pod \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.900173 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-trusted-ca-bundle\") pod \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.900233 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-service-ca\") pod \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.900327 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-oauth-serving-cert\") pod \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.900479 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-oauth-config\") pod \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\" (UID: \"4e69063c-9ede-4474-9fd3-b16db60b9a7c\") " Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.901006 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-config" (OuterVolumeSpecName: "console-config") pod "4e69063c-9ede-4474-9fd3-b16db60b9a7c" (UID: "4e69063c-9ede-4474-9fd3-b16db60b9a7c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.901037 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4e69063c-9ede-4474-9fd3-b16db60b9a7c" (UID: "4e69063c-9ede-4474-9fd3-b16db60b9a7c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.901051 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-service-ca" (OuterVolumeSpecName: "service-ca") pod "4e69063c-9ede-4474-9fd3-b16db60b9a7c" (UID: "4e69063c-9ede-4474-9fd3-b16db60b9a7c"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.903191 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6595f747-432a-4afe-ad8c-fd3f44fa85e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.903286 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "4e69063c-9ede-4474-9fd3-b16db60b9a7c" (UID: "4e69063c-9ede-4474-9fd3-b16db60b9a7c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.903331 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1662136-4082-412a-9846-92ea9aff9350-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.903349 4907 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.903361 4907 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.903374 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnz9z\" (UniqueName: \"kubernetes.io/projected/e1662136-4082-412a-9846-92ea9aff9350-kube-api-access-dnz9z\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.903390 4907 reconciler_common.go:293] "Volume detached for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.903406 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppbs2\" (UniqueName: \"kubernetes.io/projected/6595f747-432a-4afe-ad8c-fd3f44fa85e6-kube-api-access-ppbs2\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.904203 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e69063c-9ede-4474-9fd3-b16db60b9a7c-kube-api-access-bglhw" (OuterVolumeSpecName: "kube-api-access-bglhw") pod "4e69063c-9ede-4474-9fd3-b16db60b9a7c" (UID: "4e69063c-9ede-4474-9fd3-b16db60b9a7c"). InnerVolumeSpecName "kube-api-access-bglhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.904691 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "4e69063c-9ede-4474-9fd3-b16db60b9a7c" (UID: "4e69063c-9ede-4474-9fd3-b16db60b9a7c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:54.905470 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "4e69063c-9ede-4474-9fd3-b16db60b9a7c" (UID: "4e69063c-9ede-4474-9fd3-b16db60b9a7c"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.005358 4907 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.005398 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bglhw\" (UniqueName: \"kubernetes.io/projected/4e69063c-9ede-4474-9fd3-b16db60b9a7c-kube-api-access-bglhw\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.005411 4907 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4e69063c-9ede-4474-9fd3-b16db60b9a7c-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.005423 4907 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4e69063c-9ede-4474-9fd3-b16db60b9a7c-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.581624 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07d384d2-43f4-4290-837f-fb784fc28b37","Type":"ContainerStarted","Data":"2e2c396046bd916e198432130b3c1ef49e128425c030605adab0e67ceca6b8eb"} Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.583649 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-65dccccccb-km74l_4e69063c-9ede-4474-9fd3-b16db60b9a7c/console/0.log" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.583744 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-kpsck" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.586621 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65dccccccb-km74l" event={"ID":"4e69063c-9ede-4474-9fd3-b16db60b9a7c","Type":"ContainerDied","Data":"d8e779a538fd171e62688ea894409db54be158536da7dede33422f76801c0085"} Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.586684 4907 scope.go:117] "RemoveContainer" containerID="73c9e83ba8aebe975e11ef4d07847ef42cb88307c5cad2d2f3f2c241d0b95d45" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.586711 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-69b7-account-create-update-6pfhq" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.586783 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-65dccccccb-km74l" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.586815 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pk78t" Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.631743 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-65dccccccb-km74l"] Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.640092 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-65dccccccb-km74l"] Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.761331 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e69063c-9ede-4474-9fd3-b16db60b9a7c" path="/var/lib/kubelet/pods/4e69063c-9ede-4474-9fd3-b16db60b9a7c/volumes" Jan 27 18:26:55 crc kubenswrapper[4907]: W0127 18:26:55.819090 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3dfbf931_f21b_4652_8640_0208df4b40cc.slice/crio-b992f75a84ade7d142ee88f0cae31268ac506e2cda17de3f171724de369df712 WatchSource:0}: Error finding container b992f75a84ade7d142ee88f0cae31268ac506e2cda17de3f171724de369df712: Status 404 returned error can't find the container with id b992f75a84ade7d142ee88f0cae31268ac506e2cda17de3f171724de369df712 Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.840627 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-214c-account-create-update-5x6dm"] Jan 27 18:26:55 crc kubenswrapper[4907]: I0127 18:26:55.864306 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-z8s67"] Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.052627 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0abc-account-create-update-gwjft"] Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.068648 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-9r669"] Jan 27 18:26:56 crc kubenswrapper[4907]: W0127 18:26:56.072448 4907 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1904c81_5de8_431a_9304_5b4ba1771c73.slice/crio-c11096592ce1a12d508211f6240e829bf4112cc6bdb293894538f7d40695ce01 WatchSource:0}: Error finding container c11096592ce1a12d508211f6240e829bf4112cc6bdb293894538f7d40695ce01: Status 404 returned error can't find the container with id c11096592ce1a12d508211f6240e829bf4112cc6bdb293894538f7d40695ce01 Jan 27 18:26:56 crc kubenswrapper[4907]: W0127 18:26:56.075572 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84c433c1_ca56_4d2d_ac7b_0f2ceadcaf8d.slice/crio-777868a2dc834b59c6724cb5a2cf709431723baebc15170f52c29a2b15406bea WatchSource:0}: Error finding container 777868a2dc834b59c6724cb5a2cf709431723baebc15170f52c29a2b15406bea: Status 404 returned error can't find the container with id 777868a2dc834b59c6724cb5a2cf709431723baebc15170f52c29a2b15406bea Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.081423 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qdj7p"] Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.100415 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c84c-account-create-update-4ld5d"] Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.179211 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 18:26:56 crc kubenswrapper[4907]: W0127 18:26:56.185756 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf7e986b_1dca_4795_85f7_e62cdd92d995.slice/crio-1092639fa0f80a1be62bda37d7da1cd82148d73c463fcd3359f6c4e3bb1163f2 WatchSource:0}: Error finding container 1092639fa0f80a1be62bda37d7da1cd82148d73c463fcd3359f6c4e3bb1163f2: Status 404 returned error can't find the container with id 1092639fa0f80a1be62bda37d7da1cd82148d73c463fcd3359f6c4e3bb1163f2 Jan 27 18:26:56 
crc kubenswrapper[4907]: I0127 18:26:56.273419 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pk78t"] Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.283414 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-pk78t"] Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.594722 4907 generic.go:334] "Generic (PLEG): container finished" podID="84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d" containerID="4b4bc386243282ee46469e04f6c8ed985996c9353b6cb7136b8bfb839c0ee6f9" exitCode=0 Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.594770 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-9r669" event={"ID":"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d","Type":"ContainerDied","Data":"4b4bc386243282ee46469e04f6c8ed985996c9353b6cb7136b8bfb839c0ee6f9"} Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.595198 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-9r669" event={"ID":"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d","Type":"ContainerStarted","Data":"777868a2dc834b59c6724cb5a2cf709431723baebc15170f52c29a2b15406bea"} Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.598331 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"1092639fa0f80a1be62bda37d7da1cd82148d73c463fcd3359f6c4e3bb1163f2"} Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.600117 4907 generic.go:334] "Generic (PLEG): container finished" podID="94f0fdef-b14b-4204-be1e-90a5d19c96e7" containerID="7e112e59f5539451246e55f428962aa397a6f7440a0b99d285fc7caa5e097dae" exitCode=0 Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.600199 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-z8s67" 
event={"ID":"94f0fdef-b14b-4204-be1e-90a5d19c96e7","Type":"ContainerDied","Data":"7e112e59f5539451246e55f428962aa397a6f7440a0b99d285fc7caa5e097dae"} Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.600232 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-z8s67" event={"ID":"94f0fdef-b14b-4204-be1e-90a5d19c96e7","Type":"ContainerStarted","Data":"17d014ea0685c8867c39231eec5d1467fcf85b1648296fd7fed465a508647fea"} Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.602425 4907 generic.go:334] "Generic (PLEG): container finished" podID="0ef0a2ee-9212-41c9-b2b9-d59602779eef" containerID="a0326a0a501bbf85df41833a1dcafeaa580f24dd04c07b7e0136b03e2680cb1d" exitCode=0 Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.602484 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c84c-account-create-update-4ld5d" event={"ID":"0ef0a2ee-9212-41c9-b2b9-d59602779eef","Type":"ContainerDied","Data":"a0326a0a501bbf85df41833a1dcafeaa580f24dd04c07b7e0136b03e2680cb1d"} Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.602513 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c84c-account-create-update-4ld5d" event={"ID":"0ef0a2ee-9212-41c9-b2b9-d59602779eef","Type":"ContainerStarted","Data":"246c6f82a91c258ac80288f958a885d0bd43f6f1fc6a0f307cb42d21345d9db6"} Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.604495 4907 generic.go:334] "Generic (PLEG): container finished" podID="3dfbf931-f21b-4652-8640-0208df4b40cc" containerID="c19d26368aac03d155fdb3c70b0039080c0304f82ccc02493e32e5a1524bf346" exitCode=0 Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.604539 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-214c-account-create-update-5x6dm" event={"ID":"3dfbf931-f21b-4652-8640-0208df4b40cc","Type":"ContainerDied","Data":"c19d26368aac03d155fdb3c70b0039080c0304f82ccc02493e32e5a1524bf346"} Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 
18:26:56.604595 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-214c-account-create-update-5x6dm" event={"ID":"3dfbf931-f21b-4652-8640-0208df4b40cc","Type":"ContainerStarted","Data":"b992f75a84ade7d142ee88f0cae31268ac506e2cda17de3f171724de369df712"} Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.613077 4907 generic.go:334] "Generic (PLEG): container finished" podID="d7319b76-e25b-4370-ac3e-641efd764024" containerID="10e55fe5e5f3f44965d382e66da77d31f65621ac8cb2c4078f7f47ef99fb45e2" exitCode=0 Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.613165 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0abc-account-create-update-gwjft" event={"ID":"d7319b76-e25b-4370-ac3e-641efd764024","Type":"ContainerDied","Data":"10e55fe5e5f3f44965d382e66da77d31f65621ac8cb2c4078f7f47ef99fb45e2"} Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.613203 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0abc-account-create-update-gwjft" event={"ID":"d7319b76-e25b-4370-ac3e-641efd764024","Type":"ContainerStarted","Data":"81e9a0b128516c273908688dfb83a6c596d1f868fadf9a8d45bb6728270ec6e8"} Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.615852 4907 generic.go:334] "Generic (PLEG): container finished" podID="f1904c81-5de8-431a-9304-5b4ba1771c73" containerID="96f5f54754dcd10e1621eddfd599cc7bbc58a42f87dd064a880d55efea873246" exitCode=0 Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.615920 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qdj7p" event={"ID":"f1904c81-5de8-431a-9304-5b4ba1771c73","Type":"ContainerDied","Data":"96f5f54754dcd10e1621eddfd599cc7bbc58a42f87dd064a880d55efea873246"} Jan 27 18:26:56 crc kubenswrapper[4907]: I0127 18:26:56.615959 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qdj7p" 
event={"ID":"f1904c81-5de8-431a-9304-5b4ba1771c73","Type":"ContainerStarted","Data":"c11096592ce1a12d508211f6240e829bf4112cc6bdb293894538f7d40695ce01"} Jan 27 18:26:57 crc kubenswrapper[4907]: I0127 18:26:57.640526 4907 generic.go:334] "Generic (PLEG): container finished" podID="f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" containerID="f4b13668a28a72bb72f1ac77a40e49f191cf7ff0408f2b34e10c4e165b48abf6" exitCode=0 Jan 27 18:26:57 crc kubenswrapper[4907]: I0127 18:26:57.640878 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce","Type":"ContainerDied","Data":"f4b13668a28a72bb72f1ac77a40e49f191cf7ff0408f2b34e10c4e165b48abf6"} Jan 27 18:26:57 crc kubenswrapper[4907]: I0127 18:26:57.825606 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6595f747-432a-4afe-ad8c-fd3f44fa85e6" path="/var/lib/kubelet/pods/6595f747-432a-4afe-ad8c-fd3f44fa85e6/volumes" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.263407 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-214c-account-create-update-5x6dm" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.404435 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfbf931-f21b-4652-8640-0208df4b40cc-operator-scripts\") pod \"3dfbf931-f21b-4652-8640-0208df4b40cc\" (UID: \"3dfbf931-f21b-4652-8640-0208df4b40cc\") " Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.404614 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7kjr\" (UniqueName: \"kubernetes.io/projected/3dfbf931-f21b-4652-8640-0208df4b40cc-kube-api-access-n7kjr\") pod \"3dfbf931-f21b-4652-8640-0208df4b40cc\" (UID: \"3dfbf931-f21b-4652-8640-0208df4b40cc\") " Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.409524 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dfbf931-f21b-4652-8640-0208df4b40cc-kube-api-access-n7kjr" (OuterVolumeSpecName: "kube-api-access-n7kjr") pod "3dfbf931-f21b-4652-8640-0208df4b40cc" (UID: "3dfbf931-f21b-4652-8640-0208df4b40cc"). InnerVolumeSpecName "kube-api-access-n7kjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.410024 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dfbf931-f21b-4652-8640-0208df4b40cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3dfbf931-f21b-4652-8640-0208df4b40cc" (UID: "3dfbf931-f21b-4652-8640-0208df4b40cc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.410870 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0abc-account-create-update-gwjft" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.457257 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c84c-account-create-update-4ld5d" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.492488 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qdj7p" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.506159 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2pt9\" (UniqueName: \"kubernetes.io/projected/d7319b76-e25b-4370-ac3e-641efd764024-kube-api-access-f2pt9\") pod \"d7319b76-e25b-4370-ac3e-641efd764024\" (UID: \"d7319b76-e25b-4370-ac3e-641efd764024\") " Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.506388 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7319b76-e25b-4370-ac3e-641efd764024-operator-scripts\") pod \"d7319b76-e25b-4370-ac3e-641efd764024\" (UID: \"d7319b76-e25b-4370-ac3e-641efd764024\") " Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.506958 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7319b76-e25b-4370-ac3e-641efd764024-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d7319b76-e25b-4370-ac3e-641efd764024" (UID: "d7319b76-e25b-4370-ac3e-641efd764024"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.507243 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7kjr\" (UniqueName: \"kubernetes.io/projected/3dfbf931-f21b-4652-8640-0208df4b40cc-kube-api-access-n7kjr\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.507321 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7319b76-e25b-4370-ac3e-641efd764024-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.507450 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfbf931-f21b-4652-8640-0208df4b40cc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.515170 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7319b76-e25b-4370-ac3e-641efd764024-kube-api-access-f2pt9" (OuterVolumeSpecName: "kube-api-access-f2pt9") pod "d7319b76-e25b-4370-ac3e-641efd764024" (UID: "d7319b76-e25b-4370-ac3e-641efd764024"). InnerVolumeSpecName "kube-api-access-f2pt9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.609321 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1904c81-5de8-431a-9304-5b4ba1771c73-operator-scripts\") pod \"f1904c81-5de8-431a-9304-5b4ba1771c73\" (UID: \"f1904c81-5de8-431a-9304-5b4ba1771c73\") " Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.609811 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sh46\" (UniqueName: \"kubernetes.io/projected/0ef0a2ee-9212-41c9-b2b9-d59602779eef-kube-api-access-4sh46\") pod \"0ef0a2ee-9212-41c9-b2b9-d59602779eef\" (UID: \"0ef0a2ee-9212-41c9-b2b9-d59602779eef\") " Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.609973 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef0a2ee-9212-41c9-b2b9-d59602779eef-operator-scripts\") pod \"0ef0a2ee-9212-41c9-b2b9-d59602779eef\" (UID: \"0ef0a2ee-9212-41c9-b2b9-d59602779eef\") " Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.610086 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1904c81-5de8-431a-9304-5b4ba1771c73-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f1904c81-5de8-431a-9304-5b4ba1771c73" (UID: "f1904c81-5de8-431a-9304-5b4ba1771c73"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.610160 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh2rl\" (UniqueName: \"kubernetes.io/projected/f1904c81-5de8-431a-9304-5b4ba1771c73-kube-api-access-zh2rl\") pod \"f1904c81-5de8-431a-9304-5b4ba1771c73\" (UID: \"f1904c81-5de8-431a-9304-5b4ba1771c73\") " Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.610913 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ef0a2ee-9212-41c9-b2b9-d59602779eef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ef0a2ee-9212-41c9-b2b9-d59602779eef" (UID: "0ef0a2ee-9212-41c9-b2b9-d59602779eef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.611052 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1904c81-5de8-431a-9304-5b4ba1771c73-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.611098 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2pt9\" (UniqueName: \"kubernetes.io/projected/d7319b76-e25b-4370-ac3e-641efd764024-kube-api-access-f2pt9\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.611123 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef0a2ee-9212-41c9-b2b9-d59602779eef-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.614292 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1904c81-5de8-431a-9304-5b4ba1771c73-kube-api-access-zh2rl" (OuterVolumeSpecName: "kube-api-access-zh2rl") pod 
"f1904c81-5de8-431a-9304-5b4ba1771c73" (UID: "f1904c81-5de8-431a-9304-5b4ba1771c73"). InnerVolumeSpecName "kube-api-access-zh2rl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.614663 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ef0a2ee-9212-41c9-b2b9-d59602779eef-kube-api-access-4sh46" (OuterVolumeSpecName: "kube-api-access-4sh46") pod "0ef0a2ee-9212-41c9-b2b9-d59602779eef" (UID: "0ef0a2ee-9212-41c9-b2b9-d59602779eef"). InnerVolumeSpecName "kube-api-access-4sh46". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.660941 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-214c-account-create-update-5x6dm" event={"ID":"3dfbf931-f21b-4652-8640-0208df4b40cc","Type":"ContainerDied","Data":"b992f75a84ade7d142ee88f0cae31268ac506e2cda17de3f171724de369df712"} Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.660980 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b992f75a84ade7d142ee88f0cae31268ac506e2cda17de3f171724de369df712" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.661038 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-214c-account-create-update-5x6dm" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.667459 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0abc-account-create-update-gwjft" event={"ID":"d7319b76-e25b-4370-ac3e-641efd764024","Type":"ContainerDied","Data":"81e9a0b128516c273908688dfb83a6c596d1f868fadf9a8d45bb6728270ec6e8"} Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.667513 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81e9a0b128516c273908688dfb83a6c596d1f868fadf9a8d45bb6728270ec6e8" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.667606 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0abc-account-create-update-gwjft" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.673045 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qdj7p" event={"ID":"f1904c81-5de8-431a-9304-5b4ba1771c73","Type":"ContainerDied","Data":"c11096592ce1a12d508211f6240e829bf4112cc6bdb293894538f7d40695ce01"} Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.673123 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c11096592ce1a12d508211f6240e829bf4112cc6bdb293894538f7d40695ce01" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.673243 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-qdj7p" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.679301 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"7706857a2eda31b255c275fc07115ff6bbb35d1894888292c20b83bc5cda73d4"} Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.685287 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c84c-account-create-update-4ld5d" event={"ID":"0ef0a2ee-9212-41c9-b2b9-d59602779eef","Type":"ContainerDied","Data":"246c6f82a91c258ac80288f958a885d0bd43f6f1fc6a0f307cb42d21345d9db6"} Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.685331 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="246c6f82a91c258ac80288f958a885d0bd43f6f1fc6a0f307cb42d21345d9db6" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.685400 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c84c-account-create-update-4ld5d" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.712948 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh2rl\" (UniqueName: \"kubernetes.io/projected/f1904c81-5de8-431a-9304-5b4ba1771c73-kube-api-access-zh2rl\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.712968 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sh46\" (UniqueName: \"kubernetes.io/projected/0ef0a2ee-9212-41c9-b2b9-d59602779eef-kube-api-access-4sh46\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.738669 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-z8s67" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.778591 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-96prz" podUID="daaea3c0-a88d-442f-be06-bb95b2825fcc" containerName="ovn-controller" probeResult="failure" output=< Jan 27 18:26:58 crc kubenswrapper[4907]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 18:26:58 crc kubenswrapper[4907]: > Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.801515 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-9r669" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.815450 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdq4c\" (UniqueName: \"kubernetes.io/projected/94f0fdef-b14b-4204-be1e-90a5d19c96e7-kube-api-access-bdq4c\") pod \"94f0fdef-b14b-4204-be1e-90a5d19c96e7\" (UID: \"94f0fdef-b14b-4204-be1e-90a5d19c96e7\") " Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.815524 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94f0fdef-b14b-4204-be1e-90a5d19c96e7-operator-scripts\") pod \"94f0fdef-b14b-4204-be1e-90a5d19c96e7\" (UID: \"94f0fdef-b14b-4204-be1e-90a5d19c96e7\") " Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.816485 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94f0fdef-b14b-4204-be1e-90a5d19c96e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94f0fdef-b14b-4204-be1e-90a5d19c96e7" (UID: "94f0fdef-b14b-4204-be1e-90a5d19c96e7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.820168 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94f0fdef-b14b-4204-be1e-90a5d19c96e7-kube-api-access-bdq4c" (OuterVolumeSpecName: "kube-api-access-bdq4c") pod "94f0fdef-b14b-4204-be1e-90a5d19c96e7" (UID: "94f0fdef-b14b-4204-be1e-90a5d19c96e7"). InnerVolumeSpecName "kube-api-access-bdq4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.825219 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.828229 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-2q6jk" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.916880 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-operator-scripts\") pod \"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d\" (UID: \"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d\") " Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.917235 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82plg\" (UniqueName: \"kubernetes.io/projected/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-kube-api-access-82plg\") pod \"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d\" (UID: \"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d\") " Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.917398 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d" (UID: "84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.919929 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.919960 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdq4c\" (UniqueName: \"kubernetes.io/projected/94f0fdef-b14b-4204-be1e-90a5d19c96e7-kube-api-access-bdq4c\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.919974 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94f0fdef-b14b-4204-be1e-90a5d19c96e7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:58 crc kubenswrapper[4907]: I0127 18:26:58.922595 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-kube-api-access-82plg" (OuterVolumeSpecName: "kube-api-access-82plg") pod "84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d" (UID: "84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d"). InnerVolumeSpecName "kube-api-access-82plg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.022926 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82plg\" (UniqueName: \"kubernetes.io/projected/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d-kube-api-access-82plg\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.156907 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-96prz-config-rtl4s"] Jan 27 18:26:59 crc kubenswrapper[4907]: E0127 18:26:59.157281 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1904c81-5de8-431a-9304-5b4ba1771c73" containerName="mariadb-database-create" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157298 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1904c81-5de8-431a-9304-5b4ba1771c73" containerName="mariadb-database-create" Jan 27 18:26:59 crc kubenswrapper[4907]: E0127 18:26:59.157320 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1662136-4082-412a-9846-92ea9aff9350" containerName="mariadb-account-create-update" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157327 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1662136-4082-412a-9846-92ea9aff9350" containerName="mariadb-account-create-update" Jan 27 18:26:59 crc kubenswrapper[4907]: E0127 18:26:59.157342 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7319b76-e25b-4370-ac3e-641efd764024" containerName="mariadb-account-create-update" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157348 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7319b76-e25b-4370-ac3e-641efd764024" containerName="mariadb-account-create-update" Jan 27 18:26:59 crc kubenswrapper[4907]: E0127 18:26:59.157366 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dfbf931-f21b-4652-8640-0208df4b40cc" containerName="mariadb-account-create-update" Jan 27 
18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157371 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dfbf931-f21b-4652-8640-0208df4b40cc" containerName="mariadb-account-create-update" Jan 27 18:26:59 crc kubenswrapper[4907]: E0127 18:26:59.157387 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e69063c-9ede-4474-9fd3-b16db60b9a7c" containerName="console" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157393 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e69063c-9ede-4474-9fd3-b16db60b9a7c" containerName="console" Jan 27 18:26:59 crc kubenswrapper[4907]: E0127 18:26:59.157404 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef0a2ee-9212-41c9-b2b9-d59602779eef" containerName="mariadb-account-create-update" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157410 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef0a2ee-9212-41c9-b2b9-d59602779eef" containerName="mariadb-account-create-update" Jan 27 18:26:59 crc kubenswrapper[4907]: E0127 18:26:59.157422 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d" containerName="mariadb-database-create" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157428 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d" containerName="mariadb-database-create" Jan 27 18:26:59 crc kubenswrapper[4907]: E0127 18:26:59.157435 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f0fdef-b14b-4204-be1e-90a5d19c96e7" containerName="mariadb-database-create" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157441 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f0fdef-b14b-4204-be1e-90a5d19c96e7" containerName="mariadb-database-create" Jan 27 18:26:59 crc kubenswrapper[4907]: E0127 18:26:59.157452 4907 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a0adeee4-a225-49f2-8a87-f44aa772d5f2" containerName="mariadb-database-create" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157458 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0adeee4-a225-49f2-8a87-f44aa772d5f2" containerName="mariadb-database-create" Jan 27 18:26:59 crc kubenswrapper[4907]: E0127 18:26:59.157465 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6595f747-432a-4afe-ad8c-fd3f44fa85e6" containerName="mariadb-account-create-update" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157471 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="6595f747-432a-4afe-ad8c-fd3f44fa85e6" containerName="mariadb-account-create-update" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157662 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dfbf931-f21b-4652-8640-0208df4b40cc" containerName="mariadb-account-create-update" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157671 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="94f0fdef-b14b-4204-be1e-90a5d19c96e7" containerName="mariadb-database-create" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157677 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1904c81-5de8-431a-9304-5b4ba1771c73" containerName="mariadb-database-create" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157690 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1662136-4082-412a-9846-92ea9aff9350" containerName="mariadb-account-create-update" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157702 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="6595f747-432a-4afe-ad8c-fd3f44fa85e6" containerName="mariadb-account-create-update" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157714 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7319b76-e25b-4370-ac3e-641efd764024" 
containerName="mariadb-account-create-update" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157726 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e69063c-9ede-4474-9fd3-b16db60b9a7c" containerName="console" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157735 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0adeee4-a225-49f2-8a87-f44aa772d5f2" containerName="mariadb-database-create" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157746 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef0a2ee-9212-41c9-b2b9-d59602779eef" containerName="mariadb-account-create-update" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.157757 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d" containerName="mariadb-database-create" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.158648 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.162064 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.172030 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96prz-config-rtl4s"] Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.331134 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-additional-scripts\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.331238 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run-ovn\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.331282 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.331360 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-log-ovn\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.331447 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcj24\" (UniqueName: \"kubernetes.io/projected/abd14b5b-15ac-4d30-8105-13f40a1edb77-kube-api-access-gcj24\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.331502 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-scripts\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.433758 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-additional-scripts\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.433854 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run-ovn\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.433888 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.433964 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-log-ovn\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.434038 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcj24\" (UniqueName: \"kubernetes.io/projected/abd14b5b-15ac-4d30-8105-13f40a1edb77-kube-api-access-gcj24\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.434080 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-scripts\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.435900 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.435979 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run-ovn\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.436034 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-log-ovn\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.436280 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-additional-scripts\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.437387 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-scripts\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.458205 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcj24\" (UniqueName: \"kubernetes.io/projected/abd14b5b-15ac-4d30-8105-13f40a1edb77-kube-api-access-gcj24\") pod \"ovn-controller-96prz-config-rtl4s\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.480445 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.699805 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce","Type":"ContainerStarted","Data":"7984064ca1dcff85b740cc99adb7b34caa53c5f6257193fa5be4a5e3dd9a8bf1"} Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.700265 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.702881 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-9r669" event={"ID":"84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d","Type":"ContainerDied","Data":"777868a2dc834b59c6724cb5a2cf709431723baebc15170f52c29a2b15406bea"} Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.702924 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="777868a2dc834b59c6724cb5a2cf709431723baebc15170f52c29a2b15406bea" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.702980 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-9r669" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.717764 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"f6a8c2a03beefa353abd26d99a6f91a1eb474d911b85150474113b8b7404af6e"} Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.722231 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-z8s67" event={"ID":"94f0fdef-b14b-4204-be1e-90a5d19c96e7","Type":"ContainerDied","Data":"17d014ea0685c8867c39231eec5d1467fcf85b1648296fd7fed465a508647fea"} Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.722473 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17d014ea0685c8867c39231eec5d1467fcf85b1648296fd7fed465a508647fea" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.723003 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-z8s67" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.744832 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07d384d2-43f4-4290-837f-fb784fc28b37","Type":"ContainerStarted","Data":"350e9673241bb71ce82a65cf04d4c261de68a045dd6387627fc7a3c8bcd317fa"} Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.751494 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=53.120337723 podStartE2EDuration="1m1.751471758s" podCreationTimestamp="2026-01-27 18:25:58 +0000 UTC" firstStartedPulling="2026-01-27 18:26:13.994522928 +0000 UTC m=+1229.123805540" lastFinishedPulling="2026-01-27 18:26:22.625656963 +0000 UTC m=+1237.754939575" observedRunningTime="2026-01-27 18:26:59.734542913 +0000 UTC m=+1274.863825525" watchObservedRunningTime="2026-01-27 18:26:59.751471758 +0000 UTC m=+1274.880754380" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.910917 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4m58q"] Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.913195 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4m58q" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.916568 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.928283 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4m58q"] Jan 27 18:26:59 crc kubenswrapper[4907]: I0127 18:26:59.977175 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96prz-config-rtl4s"] Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.056041 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z2fx\" (UniqueName: \"kubernetes.io/projected/a6b62675-d164-4a1a-b3a3-e21eda5b7190-kube-api-access-5z2fx\") pod \"root-account-create-update-4m58q\" (UID: \"a6b62675-d164-4a1a-b3a3-e21eda5b7190\") " pod="openstack/root-account-create-update-4m58q" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.056148 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6b62675-d164-4a1a-b3a3-e21eda5b7190-operator-scripts\") pod \"root-account-create-update-4m58q\" (UID: \"a6b62675-d164-4a1a-b3a3-e21eda5b7190\") " pod="openstack/root-account-create-update-4m58q" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.157731 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z2fx\" (UniqueName: \"kubernetes.io/projected/a6b62675-d164-4a1a-b3a3-e21eda5b7190-kube-api-access-5z2fx\") pod \"root-account-create-update-4m58q\" (UID: \"a6b62675-d164-4a1a-b3a3-e21eda5b7190\") " pod="openstack/root-account-create-update-4m58q" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.158188 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6b62675-d164-4a1a-b3a3-e21eda5b7190-operator-scripts\") pod \"root-account-create-update-4m58q\" (UID: \"a6b62675-d164-4a1a-b3a3-e21eda5b7190\") " pod="openstack/root-account-create-update-4m58q" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.159112 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6b62675-d164-4a1a-b3a3-e21eda5b7190-operator-scripts\") pod \"root-account-create-update-4m58q\" (UID: \"a6b62675-d164-4a1a-b3a3-e21eda5b7190\") " pod="openstack/root-account-create-update-4m58q" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.177988 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z2fx\" (UniqueName: \"kubernetes.io/projected/a6b62675-d164-4a1a-b3a3-e21eda5b7190-kube-api-access-5z2fx\") pod \"root-account-create-update-4m58q\" (UID: \"a6b62675-d164-4a1a-b3a3-e21eda5b7190\") " pod="openstack/root-account-create-update-4m58q" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.272769 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4m58q" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.395907 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.398740 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.405179 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.413471 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.565356 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98fhc\" (UniqueName: \"kubernetes.io/projected/a0791214-d591-446c-a64a-e1e0f237392e-kube-api-access-98fhc\") pod \"mysqld-exporter-0\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " pod="openstack/mysqld-exporter-0" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.565740 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " pod="openstack/mysqld-exporter-0" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.565791 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-config-data\") pod \"mysqld-exporter-0\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " pod="openstack/mysqld-exporter-0" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.667755 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " pod="openstack/mysqld-exporter-0" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.667836 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-config-data\") pod \"mysqld-exporter-0\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " pod="openstack/mysqld-exporter-0" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.668295 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98fhc\" (UniqueName: \"kubernetes.io/projected/a0791214-d591-446c-a64a-e1e0f237392e-kube-api-access-98fhc\") pod \"mysqld-exporter-0\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " pod="openstack/mysqld-exporter-0" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.674386 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " pod="openstack/mysqld-exporter-0" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.674942 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-config-data\") pod \"mysqld-exporter-0\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " pod="openstack/mysqld-exporter-0" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.685238 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98fhc\" (UniqueName: \"kubernetes.io/projected/a0791214-d591-446c-a64a-e1e0f237392e-kube-api-access-98fhc\") pod \"mysqld-exporter-0\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " pod="openstack/mysqld-exporter-0" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.758057 4907 generic.go:334] "Generic (PLEG): container finished" podID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" 
containerID="6e1c166ec4ad12335939eace84afc80867bd30207c4badea742d3beea9a3565a" exitCode=0 Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.758403 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e","Type":"ContainerDied","Data":"6e1c166ec4ad12335939eace84afc80867bd30207c4badea742d3beea9a3565a"} Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.762265 4907 generic.go:334] "Generic (PLEG): container finished" podID="45d050d2-eeb4-4603-a6c4-1cbdd454ea35" containerID="9e14e3ba528ee447cbbdbc0a37f0975e10855bd00aabc894dc382b32e4dc8e87" exitCode=0 Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.762366 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"45d050d2-eeb4-4603-a6c4-1cbdd454ea35","Type":"ContainerDied","Data":"9e14e3ba528ee447cbbdbc0a37f0975e10855bd00aabc894dc382b32e4dc8e87"} Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.768910 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"fc62989a2170f69bb2ca0015679f24a3a2e45c2c211bcc5ef6b0ed5f362736d1"} Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.768977 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"6fb382f51c20c62e57b816a0aedce41d2313c9641951bf7b79c5dc7f9169ab53"} Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.778818 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96prz-config-rtl4s" event={"ID":"abd14b5b-15ac-4d30-8105-13f40a1edb77","Type":"ContainerStarted","Data":"eab4549235d783c996004e82b23c0b9ceeeb842b079328aaaef5456cb5dca61b"} Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.779047 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-96prz-config-rtl4s" event={"ID":"abd14b5b-15ac-4d30-8105-13f40a1edb77","Type":"ContainerStarted","Data":"15a7ecb822aef91fc03055f2782820ad477ef8d9526264f5695c3e74dfdf2cde"} Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.780463 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.788808 4907 generic.go:334] "Generic (PLEG): container finished" podID="52cb02a9-7a60-4761-9770-a9b6910f1088" containerID="008c3a3f99a2ccc59327a0f9a489a17aa72fc4b82aca7d17aabd1500b22d4c8e" exitCode=0 Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.788880 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52cb02a9-7a60-4761-9770-a9b6910f1088","Type":"ContainerDied","Data":"008c3a3f99a2ccc59327a0f9a489a17aa72fc4b82aca7d17aabd1500b22d4c8e"} Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.839927 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-96prz-config-rtl4s" podStartSLOduration=1.839898996 podStartE2EDuration="1.839898996s" podCreationTimestamp="2026-01-27 18:26:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:00.814277412 +0000 UTC m=+1275.943560014" watchObservedRunningTime="2026-01-27 18:27:00.839898996 +0000 UTC m=+1275.969181608" Jan 27 18:27:00 crc kubenswrapper[4907]: I0127 18:27:00.854228 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4m58q"] Jan 27 18:27:00 crc kubenswrapper[4907]: W0127 18:27:00.859343 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6b62675_d164_4a1a_b3a3_e21eda5b7190.slice/crio-059801fdf4ed9ce0785e5846a3582563c0c4cfafd7ee2c92d3ef9425540dec04 
WatchSource:0}: Error finding container 059801fdf4ed9ce0785e5846a3582563c0c4cfafd7ee2c92d3ef9425540dec04: Status 404 returned error can't find the container with id 059801fdf4ed9ce0785e5846a3582563c0c4cfafd7ee2c92d3ef9425540dec04 Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.110935 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.332091 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 18:27:01 crc kubenswrapper[4907]: W0127 18:27:01.656335 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0791214_d591_446c_a64a_e1e0f237392e.slice/crio-77ad6c9e5a77976fb05e5e7d1ba3b89c46ab28c4b5ebd3785c45814aff8d537c WatchSource:0}: Error finding container 77ad6c9e5a77976fb05e5e7d1ba3b89c46ab28c4b5ebd3785c45814aff8d537c: Status 404 returned error can't find the container with id 77ad6c9e5a77976fb05e5e7d1ba3b89c46ab28c4b5ebd3785c45814aff8d537c Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.815054 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"45d050d2-eeb4-4603-a6c4-1cbdd454ea35","Type":"ContainerStarted","Data":"4770e7fec46f1fc597410163b1386d755696535528a33b999e133fe947c9e759"} Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.815282 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.818259 4907 generic.go:334] "Generic (PLEG): container finished" podID="abd14b5b-15ac-4d30-8105-13f40a1edb77" containerID="eab4549235d783c996004e82b23c0b9ceeeb842b079328aaaef5456cb5dca61b" exitCode=0 Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.818335 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96prz-config-rtl4s" 
event={"ID":"abd14b5b-15ac-4d30-8105-13f40a1edb77","Type":"ContainerDied","Data":"eab4549235d783c996004e82b23c0b9ceeeb842b079328aaaef5456cb5dca61b"} Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.820049 4907 generic.go:334] "Generic (PLEG): container finished" podID="a6b62675-d164-4a1a-b3a3-e21eda5b7190" containerID="16729300b105c848b87da536ab581fbf0466941c7a08dd9bcf81bc9c3e1432ed" exitCode=0 Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.820081 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4m58q" event={"ID":"a6b62675-d164-4a1a-b3a3-e21eda5b7190","Type":"ContainerDied","Data":"16729300b105c848b87da536ab581fbf0466941c7a08dd9bcf81bc9c3e1432ed"} Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.820115 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4m58q" event={"ID":"a6b62675-d164-4a1a-b3a3-e21eda5b7190","Type":"ContainerStarted","Data":"059801fdf4ed9ce0785e5846a3582563c0c4cfafd7ee2c92d3ef9425540dec04"} Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.821912 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52cb02a9-7a60-4761-9770-a9b6910f1088","Type":"ContainerStarted","Data":"dbafa8ebc75d2673abdb01c053a4823df486a81a9c9d8589b2c27036b362c6f8"} Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.822680 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.823574 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"a0791214-d591-446c-a64a-e1e0f237392e","Type":"ContainerStarted","Data":"77ad6c9e5a77976fb05e5e7d1ba3b89c46ab28c4b5ebd3785c45814aff8d537c"} Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.825487 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" 
event={"ID":"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e","Type":"ContainerStarted","Data":"435ff92660fb60aba6fab546f0ce4c4bc90dec4040ba11caa3221944cf5e0406"} Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.825830 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.841475 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=55.129481049 podStartE2EDuration="1m3.841456283s" podCreationTimestamp="2026-01-27 18:25:58 +0000 UTC" firstStartedPulling="2026-01-27 18:26:13.913796763 +0000 UTC m=+1229.043079375" lastFinishedPulling="2026-01-27 18:26:22.625771997 +0000 UTC m=+1237.755054609" observedRunningTime="2026-01-27 18:27:01.834628927 +0000 UTC m=+1276.963911539" watchObservedRunningTime="2026-01-27 18:27:01.841456283 +0000 UTC m=+1276.970738895" Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.861088 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=54.493836948 podStartE2EDuration="1m3.861069915s" podCreationTimestamp="2026-01-27 18:25:58 +0000 UTC" firstStartedPulling="2026-01-27 18:26:13.844264839 +0000 UTC m=+1228.973547451" lastFinishedPulling="2026-01-27 18:26:23.211497766 +0000 UTC m=+1238.340780418" observedRunningTime="2026-01-27 18:27:01.85460168 +0000 UTC m=+1276.983884292" watchObservedRunningTime="2026-01-27 18:27:01.861069915 +0000 UTC m=+1276.990352527" Jan 27 18:27:01 crc kubenswrapper[4907]: I0127 18:27:01.922281 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=54.696237644 podStartE2EDuration="1m3.922257941s" podCreationTimestamp="2026-01-27 18:25:58 +0000 UTC" firstStartedPulling="2026-01-27 18:26:13.985655844 +0000 UTC m=+1229.114938456" lastFinishedPulling="2026-01-27 18:26:23.211676131 +0000 UTC 
m=+1238.340958753" observedRunningTime="2026-01-27 18:27:01.894129764 +0000 UTC m=+1277.023412386" watchObservedRunningTime="2026-01-27 18:27:01.922257941 +0000 UTC m=+1277.051540553" Jan 27 18:27:02 crc kubenswrapper[4907]: I0127 18:27:02.889151 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"3bf18378195508547b70b89096bf372a7dc672ffe170cbac40d99307c96a1858"} Jan 27 18:27:02 crc kubenswrapper[4907]: I0127 18:27:02.889400 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"2dfba7a355e582c2581f135d01bb43a9840b2c192c5a3f3869769f73c76aaadf"} Jan 27 18:27:02 crc kubenswrapper[4907]: I0127 18:27:02.889410 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"5bdd99948c581772fceb902e534dba55a63f64d1b33c3ce75125e4ee4aa82231"} Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.740391 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-d856z"] Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.742968 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.746493 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5gmz5" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.746816 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.779220 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d9vz\" (UniqueName: \"kubernetes.io/projected/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-kube-api-access-5d9vz\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.779345 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-config-data\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.779388 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-combined-ca-bundle\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.779518 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-db-sync-config-data\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 
crc kubenswrapper[4907]: I0127 18:27:03.812522 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-d856z"] Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.881765 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-config-data\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.881812 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-combined-ca-bundle\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.881868 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-db-sync-config-data\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.881985 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d9vz\" (UniqueName: \"kubernetes.io/projected/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-kube-api-access-5d9vz\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.891387 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-config-data\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " 
pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.918465 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-db-sync-config-data\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.919473 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-96prz" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.920163 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-combined-ca-bundle\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " pod="openstack/glance-db-sync-d856z" Jan 27 18:27:03 crc kubenswrapper[4907]: I0127 18:27:03.921341 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d9vz\" (UniqueName: \"kubernetes.io/projected/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-kube-api-access-5d9vz\") pod \"glance-db-sync-d856z\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " pod="openstack/glance-db-sync-d856z" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.089329 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-d856z" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.615182 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.639227 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4m58q" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.704332 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z2fx\" (UniqueName: \"kubernetes.io/projected/a6b62675-d164-4a1a-b3a3-e21eda5b7190-kube-api-access-5z2fx\") pod \"a6b62675-d164-4a1a-b3a3-e21eda5b7190\" (UID: \"a6b62675-d164-4a1a-b3a3-e21eda5b7190\") " Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.704417 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-scripts\") pod \"abd14b5b-15ac-4d30-8105-13f40a1edb77\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.704515 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcj24\" (UniqueName: \"kubernetes.io/projected/abd14b5b-15ac-4d30-8105-13f40a1edb77-kube-api-access-gcj24\") pod \"abd14b5b-15ac-4d30-8105-13f40a1edb77\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.704620 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run-ovn\") pod \"abd14b5b-15ac-4d30-8105-13f40a1edb77\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.704739 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-log-ovn\") pod \"abd14b5b-15ac-4d30-8105-13f40a1edb77\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.704788 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"var-run\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run\") pod \"abd14b5b-15ac-4d30-8105-13f40a1edb77\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.704797 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "abd14b5b-15ac-4d30-8105-13f40a1edb77" (UID: "abd14b5b-15ac-4d30-8105-13f40a1edb77"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.704865 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "abd14b5b-15ac-4d30-8105-13f40a1edb77" (UID: "abd14b5b-15ac-4d30-8105-13f40a1edb77"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.704875 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6b62675-d164-4a1a-b3a3-e21eda5b7190-operator-scripts\") pod \"a6b62675-d164-4a1a-b3a3-e21eda5b7190\" (UID: \"a6b62675-d164-4a1a-b3a3-e21eda5b7190\") " Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.704884 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run" (OuterVolumeSpecName: "var-run") pod "abd14b5b-15ac-4d30-8105-13f40a1edb77" (UID: "abd14b5b-15ac-4d30-8105-13f40a1edb77"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.704912 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-additional-scripts\") pod \"abd14b5b-15ac-4d30-8105-13f40a1edb77\" (UID: \"abd14b5b-15ac-4d30-8105-13f40a1edb77\") " Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.705499 4907 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.705811 4907 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.705827 4907 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abd14b5b-15ac-4d30-8105-13f40a1edb77-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.706274 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "abd14b5b-15ac-4d30-8105-13f40a1edb77" (UID: "abd14b5b-15ac-4d30-8105-13f40a1edb77"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.706501 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6b62675-d164-4a1a-b3a3-e21eda5b7190-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a6b62675-d164-4a1a-b3a3-e21eda5b7190" (UID: "a6b62675-d164-4a1a-b3a3-e21eda5b7190"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.706519 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-scripts" (OuterVolumeSpecName: "scripts") pod "abd14b5b-15ac-4d30-8105-13f40a1edb77" (UID: "abd14b5b-15ac-4d30-8105-13f40a1edb77"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.736723 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6b62675-d164-4a1a-b3a3-e21eda5b7190-kube-api-access-5z2fx" (OuterVolumeSpecName: "kube-api-access-5z2fx") pod "a6b62675-d164-4a1a-b3a3-e21eda5b7190" (UID: "a6b62675-d164-4a1a-b3a3-e21eda5b7190"). InnerVolumeSpecName "kube-api-access-5z2fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.744728 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abd14b5b-15ac-4d30-8105-13f40a1edb77-kube-api-access-gcj24" (OuterVolumeSpecName: "kube-api-access-gcj24") pod "abd14b5b-15ac-4d30-8105-13f40a1edb77" (UID: "abd14b5b-15ac-4d30-8105-13f40a1edb77"). InnerVolumeSpecName "kube-api-access-gcj24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.812074 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6b62675-d164-4a1a-b3a3-e21eda5b7190-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.812109 4907 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.812119 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z2fx\" (UniqueName: \"kubernetes.io/projected/a6b62675-d164-4a1a-b3a3-e21eda5b7190-kube-api-access-5z2fx\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.812129 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abd14b5b-15ac-4d30-8105-13f40a1edb77-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.812138 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcj24\" (UniqueName: \"kubernetes.io/projected/abd14b5b-15ac-4d30-8105-13f40a1edb77-kube-api-access-gcj24\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.921152 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96prz-config-rtl4s" event={"ID":"abd14b5b-15ac-4d30-8105-13f40a1edb77","Type":"ContainerDied","Data":"15a7ecb822aef91fc03055f2782820ad477ef8d9526264f5695c3e74dfdf2cde"} Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.921189 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15a7ecb822aef91fc03055f2782820ad477ef8d9526264f5695c3e74dfdf2cde" Jan 27 18:27:04 crc kubenswrapper[4907]: 
I0127 18:27:04.921249 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96prz-config-rtl4s" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.951255 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4m58q" event={"ID":"a6b62675-d164-4a1a-b3a3-e21eda5b7190","Type":"ContainerDied","Data":"059801fdf4ed9ce0785e5846a3582563c0c4cfafd7ee2c92d3ef9425540dec04"} Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.951614 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="059801fdf4ed9ce0785e5846a3582563c0c4cfafd7ee2c92d3ef9425540dec04" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.951691 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4m58q" Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.961890 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"a0791214-d591-446c-a64a-e1e0f237392e","Type":"ContainerStarted","Data":"42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129"} Jan 27 18:27:04 crc kubenswrapper[4907]: I0127 18:27:04.975107 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"0b11668a2479dd38f2210be73282a21393f490a4643d43ddb56a8e4c5a9f584a"} Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.005596 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.13421466 podStartE2EDuration="5.005572285s" podCreationTimestamp="2026-01-27 18:27:00 +0000 UTC" firstStartedPulling="2026-01-27 18:27:01.693813908 +0000 UTC m=+1276.823096520" lastFinishedPulling="2026-01-27 18:27:04.565171533 +0000 UTC m=+1279.694454145" observedRunningTime="2026-01-27 18:27:04.983737558 +0000 UTC 
m=+1280.113020170" watchObservedRunningTime="2026-01-27 18:27:05.005572285 +0000 UTC m=+1280.134854897" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.467069 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-d856z"] Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.747316 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-96prz-config-rtl4s"] Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.759178 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-96prz-config-rtl4s"] Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.842911 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-96prz-config-tbhbh"] Jan 27 18:27:05 crc kubenswrapper[4907]: E0127 18:27:05.843363 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6b62675-d164-4a1a-b3a3-e21eda5b7190" containerName="mariadb-account-create-update" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.843384 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6b62675-d164-4a1a-b3a3-e21eda5b7190" containerName="mariadb-account-create-update" Jan 27 18:27:05 crc kubenswrapper[4907]: E0127 18:27:05.843399 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abd14b5b-15ac-4d30-8105-13f40a1edb77" containerName="ovn-config" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.843406 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="abd14b5b-15ac-4d30-8105-13f40a1edb77" containerName="ovn-config" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.843652 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6b62675-d164-4a1a-b3a3-e21eda5b7190" containerName="mariadb-account-create-update" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.843681 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="abd14b5b-15ac-4d30-8105-13f40a1edb77" containerName="ovn-config" Jan 27 
18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.844366 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.848260 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.893213 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96prz-config-tbhbh"] Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.946836 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-scripts\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.947615 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run-ovn\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.947772 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.947840 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf5dw\" (UniqueName: 
\"kubernetes.io/projected/31c6a42d-7a62-485f-8700-55b962892c25-kube-api-access-nf5dw\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.947915 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-log-ovn\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:05 crc kubenswrapper[4907]: I0127 18:27:05.947992 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-additional-scripts\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.049394 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-log-ovn\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.050173 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-additional-scripts\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.049916 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-log-ovn\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.050332 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-scripts\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.050698 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run-ovn\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.050853 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.050887 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf5dw\" (UniqueName: \"kubernetes.io/projected/31c6a42d-7a62-485f-8700-55b962892c25-kube-api-access-nf5dw\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.051105 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" 
(UniqueName: \"kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-additional-scripts\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.051213 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.051379 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run-ovn\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.053176 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-scripts\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.095275 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf5dw\" (UniqueName: \"kubernetes.io/projected/31c6a42d-7a62-485f-8700-55b962892c25-kube-api-access-nf5dw\") pod \"ovn-controller-96prz-config-tbhbh\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.157964 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.284145 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4m58q"] Jan 27 18:27:06 crc kubenswrapper[4907]: I0127 18:27:06.292445 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4m58q"] Jan 27 18:27:07 crc kubenswrapper[4907]: I0127 18:27:07.000820 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-d856z" event={"ID":"1e2cf5dd-be65-4237-b77e-9bcc84cd26de","Type":"ContainerStarted","Data":"f1cc1f0ce5bf31e11bf06cb4a3cfff9c67b4b50d00c412bc683dc02e6e6d175b"} Jan 27 18:27:07 crc kubenswrapper[4907]: I0127 18:27:07.446208 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96prz-config-tbhbh"] Jan 27 18:27:07 crc kubenswrapper[4907]: W0127 18:27:07.597587 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31c6a42d_7a62_485f_8700_55b962892c25.slice/crio-f555a3a1fd8fe32e8509511395ba068b6f02f6e3eb296cd91c9825035582d0ee WatchSource:0}: Error finding container f555a3a1fd8fe32e8509511395ba068b6f02f6e3eb296cd91c9825035582d0ee: Status 404 returned error can't find the container with id f555a3a1fd8fe32e8509511395ba068b6f02f6e3eb296cd91c9825035582d0ee Jan 27 18:27:07 crc kubenswrapper[4907]: I0127 18:27:07.762415 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6b62675-d164-4a1a-b3a3-e21eda5b7190" path="/var/lib/kubelet/pods/a6b62675-d164-4a1a-b3a3-e21eda5b7190/volumes" Jan 27 18:27:07 crc kubenswrapper[4907]: I0127 18:27:07.763151 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abd14b5b-15ac-4d30-8105-13f40a1edb77" path="/var/lib/kubelet/pods/abd14b5b-15ac-4d30-8105-13f40a1edb77/volumes" Jan 27 18:27:08 crc kubenswrapper[4907]: I0127 18:27:08.012176 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96prz-config-tbhbh" event={"ID":"31c6a42d-7a62-485f-8700-55b962892c25","Type":"ContainerStarted","Data":"dcc95c68db7e4c6905571aec9659bfdb1013209939bc19b063e4a30e66ce2619"} Jan 27 18:27:08 crc kubenswrapper[4907]: I0127 18:27:08.012225 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96prz-config-tbhbh" event={"ID":"31c6a42d-7a62-485f-8700-55b962892c25","Type":"ContainerStarted","Data":"f555a3a1fd8fe32e8509511395ba068b6f02f6e3eb296cd91c9825035582d0ee"} Jan 27 18:27:08 crc kubenswrapper[4907]: I0127 18:27:08.014629 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07d384d2-43f4-4290-837f-fb784fc28b37","Type":"ContainerStarted","Data":"f17aa6e5387f4b400d9300d898498f9c1c72f3c087dfae0d6458c79687bdd903"} Jan 27 18:27:08 crc kubenswrapper[4907]: I0127 18:27:08.031229 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-96prz-config-tbhbh" podStartSLOduration=3.031209474 podStartE2EDuration="3.031209474s" podCreationTimestamp="2026-01-27 18:27:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:08.026925802 +0000 UTC m=+1283.156208424" watchObservedRunningTime="2026-01-27 18:27:08.031209474 +0000 UTC m=+1283.160492086" Jan 27 18:27:08 crc kubenswrapper[4907]: I0127 18:27:08.055616 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=11.860779106 podStartE2EDuration="32.055578713s" podCreationTimestamp="2026-01-27 18:26:36 +0000 UTC" firstStartedPulling="2026-01-27 18:26:47.463484801 +0000 UTC m=+1262.592767413" lastFinishedPulling="2026-01-27 18:27:07.658284408 +0000 UTC m=+1282.787567020" observedRunningTime="2026-01-27 18:27:08.051387123 +0000 UTC 
m=+1283.180669735" watchObservedRunningTime="2026-01-27 18:27:08.055578713 +0000 UTC m=+1283.184861325" Jan 27 18:27:09 crc kubenswrapper[4907]: I0127 18:27:09.027082 4907 generic.go:334] "Generic (PLEG): container finished" podID="31c6a42d-7a62-485f-8700-55b962892c25" containerID="dcc95c68db7e4c6905571aec9659bfdb1013209939bc19b063e4a30e66ce2619" exitCode=0 Jan 27 18:27:09 crc kubenswrapper[4907]: I0127 18:27:09.027305 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96prz-config-tbhbh" event={"ID":"31c6a42d-7a62-485f-8700-55b962892c25","Type":"ContainerDied","Data":"dcc95c68db7e4c6905571aec9659bfdb1013209939bc19b063e4a30e66ce2619"} Jan 27 18:27:09 crc kubenswrapper[4907]: I0127 18:27:09.034658 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"35be2fdfabf0064873b97bf963c92542f99a5d337ab5074770d11482006e4e37"} Jan 27 18:27:09 crc kubenswrapper[4907]: I0127 18:27:09.034700 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"619472d3a4c0ca4aee907bfc905d02d88a8d090c287d29372d17cedfaa2c2d13"} Jan 27 18:27:09 crc kubenswrapper[4907]: I0127 18:27:09.034717 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"e7ad65ecf9bcaf9b82557351175feb182542e53ab6291b28e94b806be1ba7d1e"} Jan 27 18:27:09 crc kubenswrapper[4907]: I0127 18:27:09.034728 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"6153749aa95cc3a10194e64676b36e462ca981a794b90a942ccf1db841c40676"} Jan 27 18:27:09 crc kubenswrapper[4907]: I0127 18:27:09.913752 4907 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Jan 27 18:27:09 crc kubenswrapper[4907]: I0127 18:27:09.991476 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-cszb7"] Jan 27 18:27:09 crc kubenswrapper[4907]: I0127 18:27:09.994391 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cszb7" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.000230 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.027296 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cszb7"] Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.037918 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nkvx\" (UniqueName: \"kubernetes.io/projected/887a018b-78e7-4ae0-9db1-8d6d236a0773-kube-api-access-6nkvx\") pod \"root-account-create-update-cszb7\" (UID: \"887a018b-78e7-4ae0-9db1-8d6d236a0773\") " pod="openstack/root-account-create-update-cszb7" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.038247 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/887a018b-78e7-4ae0-9db1-8d6d236a0773-operator-scripts\") pod \"root-account-create-update-cszb7\" (UID: \"887a018b-78e7-4ae0-9db1-8d6d236a0773\") " pod="openstack/root-account-create-update-cszb7" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.050665 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"9b96c9d8feb6c12fd8fbb0259bdfd119dc2c2f511c8da9fa4888aae48398b2ed"} Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.050728 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"cf12bd9f5c25df971c7a14246c30a5ac64c41b52bcbfffc7cd7a68248cf07fe3"} Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.050784 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7e986b-1dca-4795-85f7-e62cdd92d995","Type":"ContainerStarted","Data":"9234a800ab060fee0a1b2c8004047b45d0052eff7cbfbc654a596185a27b05d6"} Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.111494 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=23.197915179 podStartE2EDuration="35.111471719s" podCreationTimestamp="2026-01-27 18:26:35 +0000 UTC" firstStartedPulling="2026-01-27 18:26:56.190571176 +0000 UTC m=+1271.319853788" lastFinishedPulling="2026-01-27 18:27:08.104127716 +0000 UTC m=+1283.233410328" observedRunningTime="2026-01-27 18:27:10.092295979 +0000 UTC m=+1285.221578601" watchObservedRunningTime="2026-01-27 18:27:10.111471719 +0000 UTC m=+1285.240754331" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.143095 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nkvx\" (UniqueName: \"kubernetes.io/projected/887a018b-78e7-4ae0-9db1-8d6d236a0773-kube-api-access-6nkvx\") pod \"root-account-create-update-cszb7\" (UID: \"887a018b-78e7-4ae0-9db1-8d6d236a0773\") " pod="openstack/root-account-create-update-cszb7" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.143253 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/887a018b-78e7-4ae0-9db1-8d6d236a0773-operator-scripts\") pod \"root-account-create-update-cszb7\" (UID: \"887a018b-78e7-4ae0-9db1-8d6d236a0773\") " pod="openstack/root-account-create-update-cszb7" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.144095 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/887a018b-78e7-4ae0-9db1-8d6d236a0773-operator-scripts\") pod \"root-account-create-update-cszb7\" (UID: \"887a018b-78e7-4ae0-9db1-8d6d236a0773\") " pod="openstack/root-account-create-update-cszb7" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.172609 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nkvx\" (UniqueName: \"kubernetes.io/projected/887a018b-78e7-4ae0-9db1-8d6d236a0773-kube-api-access-6nkvx\") pod \"root-account-create-update-cszb7\" (UID: \"887a018b-78e7-4ae0-9db1-8d6d236a0773\") " pod="openstack/root-account-create-update-cszb7" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.320828 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cszb7" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.438106 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-4b5qw"] Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.440134 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.441752 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.448731 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.448795 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.448862 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcnzm\" (UniqueName: \"kubernetes.io/projected/519051cc-696d-4d4b-9dc1-a23d7689e7fc-kube-api-access-pcnzm\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.448884 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.448933 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.448977 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-config\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.451140 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.452745 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-4b5qw"] Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.549918 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-scripts\") pod \"31c6a42d-7a62-485f-8700-55b962892c25\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.549971 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf5dw\" (UniqueName: \"kubernetes.io/projected/31c6a42d-7a62-485f-8700-55b962892c25-kube-api-access-nf5dw\") pod \"31c6a42d-7a62-485f-8700-55b962892c25\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.550058 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run\") pod \"31c6a42d-7a62-485f-8700-55b962892c25\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.550145 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run-ovn\") pod \"31c6a42d-7a62-485f-8700-55b962892c25\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.550189 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-log-ovn\") pod \"31c6a42d-7a62-485f-8700-55b962892c25\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.550233 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-additional-scripts\") pod \"31c6a42d-7a62-485f-8700-55b962892c25\" (UID: \"31c6a42d-7a62-485f-8700-55b962892c25\") " Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.550431 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.550480 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 
18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.550588 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcnzm\" (UniqueName: \"kubernetes.io/projected/519051cc-696d-4d4b-9dc1-a23d7689e7fc-kube-api-access-pcnzm\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.550611 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.550663 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.550703 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-config\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.551659 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "31c6a42d-7a62-485f-8700-55b962892c25" (UID: "31c6a42d-7a62-485f-8700-55b962892c25"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.551684 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "31c6a42d-7a62-485f-8700-55b962892c25" (UID: "31c6a42d-7a62-485f-8700-55b962892c25"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.551796 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run" (OuterVolumeSpecName: "var-run") pod "31c6a42d-7a62-485f-8700-55b962892c25" (UID: "31c6a42d-7a62-485f-8700-55b962892c25"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.552613 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-config\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.554816 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.554808 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-scripts" (OuterVolumeSpecName: "scripts") pod "31c6a42d-7a62-485f-8700-55b962892c25" (UID: "31c6a42d-7a62-485f-8700-55b962892c25"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.555090 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.555447 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.555538 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.555860 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "31c6a42d-7a62-485f-8700-55b962892c25" (UID: "31c6a42d-7a62-485f-8700-55b962892c25"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.585175 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcnzm\" (UniqueName: \"kubernetes.io/projected/519051cc-696d-4d4b-9dc1-a23d7689e7fc-kube-api-access-pcnzm\") pod \"dnsmasq-dns-6d5b6d6b67-4b5qw\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.587377 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31c6a42d-7a62-485f-8700-55b962892c25-kube-api-access-nf5dw" (OuterVolumeSpecName: "kube-api-access-nf5dw") pod "31c6a42d-7a62-485f-8700-55b962892c25" (UID: "31c6a42d-7a62-485f-8700-55b962892c25"). InnerVolumeSpecName "kube-api-access-nf5dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.653420 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.653817 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf5dw\" (UniqueName: \"kubernetes.io/projected/31c6a42d-7a62-485f-8700-55b962892c25-kube-api-access-nf5dw\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.653834 4907 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.653845 4907 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:10 crc 
kubenswrapper[4907]: I0127 18:27:10.653857 4907 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31c6a42d-7a62-485f-8700-55b962892c25-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.653866 4907 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/31c6a42d-7a62-485f-8700-55b962892c25-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.765438 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:10 crc kubenswrapper[4907]: I0127 18:27:10.923039 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cszb7"] Jan 27 18:27:10 crc kubenswrapper[4907]: W0127 18:27:10.963895 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod887a018b_78e7_4ae0_9db1_8d6d236a0773.slice/crio-88967a7bb8e6cf5d2acb1d8fa3dff1d3103b3a6625b5021ece79b4cd2a9c5568 WatchSource:0}: Error finding container 88967a7bb8e6cf5d2acb1d8fa3dff1d3103b3a6625b5021ece79b4cd2a9c5568: Status 404 returned error can't find the container with id 88967a7bb8e6cf5d2acb1d8fa3dff1d3103b3a6625b5021ece79b4cd2a9c5568 Jan 27 18:27:11 crc kubenswrapper[4907]: I0127 18:27:11.067425 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96prz-config-tbhbh" event={"ID":"31c6a42d-7a62-485f-8700-55b962892c25","Type":"ContainerDied","Data":"f555a3a1fd8fe32e8509511395ba068b6f02f6e3eb296cd91c9825035582d0ee"} Jan 27 18:27:11 crc kubenswrapper[4907]: I0127 18:27:11.067752 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f555a3a1fd8fe32e8509511395ba068b6f02f6e3eb296cd91c9825035582d0ee" Jan 27 18:27:11 crc kubenswrapper[4907]: I0127 
18:27:11.067841 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96prz-config-tbhbh" Jan 27 18:27:11 crc kubenswrapper[4907]: I0127 18:27:11.070759 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cszb7" event={"ID":"887a018b-78e7-4ae0-9db1-8d6d236a0773","Type":"ContainerStarted","Data":"88967a7bb8e6cf5d2acb1d8fa3dff1d3103b3a6625b5021ece79b4cd2a9c5568"} Jan 27 18:27:11 crc kubenswrapper[4907]: I0127 18:27:11.257372 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-4b5qw"] Jan 27 18:27:11 crc kubenswrapper[4907]: W0127 18:27:11.265054 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod519051cc_696d_4d4b_9dc1_a23d7689e7fc.slice/crio-781cf8fdf6e1c415cd3c6299b37cc29b7386bd8937f3de99345c272dc7aa5d4d WatchSource:0}: Error finding container 781cf8fdf6e1c415cd3c6299b37cc29b7386bd8937f3de99345c272dc7aa5d4d: Status 404 returned error can't find the container with id 781cf8fdf6e1c415cd3c6299b37cc29b7386bd8937f3de99345c272dc7aa5d4d Jan 27 18:27:11 crc kubenswrapper[4907]: I0127 18:27:11.561590 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-96prz-config-tbhbh"] Jan 27 18:27:11 crc kubenswrapper[4907]: I0127 18:27:11.579128 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-96prz-config-tbhbh"] Jan 27 18:27:11 crc kubenswrapper[4907]: I0127 18:27:11.770389 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31c6a42d-7a62-485f-8700-55b962892c25" path="/var/lib/kubelet/pods/31c6a42d-7a62-485f-8700-55b962892c25/volumes" Jan 27 18:27:12 crc kubenswrapper[4907]: I0127 18:27:12.060904 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:12 crc kubenswrapper[4907]: I0127 18:27:12.090110 
4907 generic.go:334] "Generic (PLEG): container finished" podID="887a018b-78e7-4ae0-9db1-8d6d236a0773" containerID="aca7542bafc6f8a501bc005b4af4e8a5df758a4f8de58c5b60071b0c8be6107f" exitCode=0 Jan 27 18:27:12 crc kubenswrapper[4907]: I0127 18:27:12.090199 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cszb7" event={"ID":"887a018b-78e7-4ae0-9db1-8d6d236a0773","Type":"ContainerDied","Data":"aca7542bafc6f8a501bc005b4af4e8a5df758a4f8de58c5b60071b0c8be6107f"} Jan 27 18:27:12 crc kubenswrapper[4907]: I0127 18:27:12.092799 4907 generic.go:334] "Generic (PLEG): container finished" podID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerID="6fc8c049e4d9b2beab0ebc8103626eb7b4d1724e79a941fa43f1500bfda70d3e" exitCode=0 Jan 27 18:27:12 crc kubenswrapper[4907]: I0127 18:27:12.092932 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" event={"ID":"519051cc-696d-4d4b-9dc1-a23d7689e7fc","Type":"ContainerDied","Data":"6fc8c049e4d9b2beab0ebc8103626eb7b4d1724e79a941fa43f1500bfda70d3e"} Jan 27 18:27:12 crc kubenswrapper[4907]: I0127 18:27:12.093143 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" event={"ID":"519051cc-696d-4d4b-9dc1-a23d7689e7fc","Type":"ContainerStarted","Data":"781cf8fdf6e1c415cd3c6299b37cc29b7386bd8937f3de99345c272dc7aa5d4d"} Jan 27 18:27:13 crc kubenswrapper[4907]: I0127 18:27:13.105269 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" event={"ID":"519051cc-696d-4d4b-9dc1-a23d7689e7fc","Type":"ContainerStarted","Data":"3b4532a53aa1adfa4c375f69ce5746421a240b9dbb042e188f57fb06ae18aeb7"} Jan 27 18:27:13 crc kubenswrapper[4907]: I0127 18:27:13.135807 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" podStartSLOduration=3.135790022 podStartE2EDuration="3.135790022s" podCreationTimestamp="2026-01-27 18:27:10 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:13.130761478 +0000 UTC m=+1288.260044100" watchObservedRunningTime="2026-01-27 18:27:13.135790022 +0000 UTC m=+1288.265072634" Jan 27 18:27:14 crc kubenswrapper[4907]: I0127 18:27:14.116112 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:19 crc kubenswrapper[4907]: I0127 18:27:19.641886 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Jan 27 18:27:19 crc kubenswrapper[4907]: I0127 18:27:19.912759 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 18:27:19 crc kubenswrapper[4907]: I0127 18:27:19.928086 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="45d050d2-eeb4-4603-a6c4-1cbdd454ea35" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Jan 27 18:27:20 crc kubenswrapper[4907]: I0127 18:27:20.022752 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:27:20 crc kubenswrapper[4907]: I0127 18:27:20.767833 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:27:20 crc kubenswrapper[4907]: I0127 18:27:20.839230 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-d92b2"] Jan 27 18:27:20 crc kubenswrapper[4907]: I0127 18:27:20.839493 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" 
podUID="ef031d23-3f7c-40b7-b2f1-72863036ca69" containerName="dnsmasq-dns" containerID="cri-o://2662eb1dc76e3de2247cb485da0881450418dfc3189a9f830fe4a5c909bcefe8" gracePeriod=10 Jan 27 18:27:22 crc kubenswrapper[4907]: I0127 18:27:22.061742 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:22 crc kubenswrapper[4907]: I0127 18:27:22.065014 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:22 crc kubenswrapper[4907]: I0127 18:27:22.192694 4907 generic.go:334] "Generic (PLEG): container finished" podID="ef031d23-3f7c-40b7-b2f1-72863036ca69" containerID="2662eb1dc76e3de2247cb485da0881450418dfc3189a9f830fe4a5c909bcefe8" exitCode=0 Jan 27 18:27:22 crc kubenswrapper[4907]: I0127 18:27:22.192816 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" event={"ID":"ef031d23-3f7c-40b7-b2f1-72863036ca69","Type":"ContainerDied","Data":"2662eb1dc76e3de2247cb485da0881450418dfc3189a9f830fe4a5c909bcefe8"} Jan 27 18:27:22 crc kubenswrapper[4907]: I0127 18:27:22.194218 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:23 crc kubenswrapper[4907]: E0127 18:27:23.421056 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 27 18:27:23 crc kubenswrapper[4907]: E0127 18:27:23.421694 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5d9vz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-d856z_openstack(1e2cf5dd-be65-4237-b77e-9bcc84cd26de): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Jan 27 18:27:23 crc kubenswrapper[4907]: E0127 18:27:23.422955 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-d856z" podUID="1e2cf5dd-be65-4237-b77e-9bcc84cd26de" Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.487105 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cszb7" Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.537982 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/887a018b-78e7-4ae0-9db1-8d6d236a0773-operator-scripts\") pod \"887a018b-78e7-4ae0-9db1-8d6d236a0773\" (UID: \"887a018b-78e7-4ae0-9db1-8d6d236a0773\") " Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.538065 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nkvx\" (UniqueName: \"kubernetes.io/projected/887a018b-78e7-4ae0-9db1-8d6d236a0773-kube-api-access-6nkvx\") pod \"887a018b-78e7-4ae0-9db1-8d6d236a0773\" (UID: \"887a018b-78e7-4ae0-9db1-8d6d236a0773\") " Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.538393 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/887a018b-78e7-4ae0-9db1-8d6d236a0773-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "887a018b-78e7-4ae0-9db1-8d6d236a0773" (UID: "887a018b-78e7-4ae0-9db1-8d6d236a0773"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.538721 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/887a018b-78e7-4ae0-9db1-8d6d236a0773-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.566603 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/887a018b-78e7-4ae0-9db1-8d6d236a0773-kube-api-access-6nkvx" (OuterVolumeSpecName: "kube-api-access-6nkvx") pod "887a018b-78e7-4ae0-9db1-8d6d236a0773" (UID: "887a018b-78e7-4ae0-9db1-8d6d236a0773"). InnerVolumeSpecName "kube-api-access-6nkvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.640932 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nkvx\" (UniqueName: \"kubernetes.io/projected/887a018b-78e7-4ae0-9db1-8d6d236a0773-kube-api-access-6nkvx\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.824234 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.951634 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-nb\") pod \"ef031d23-3f7c-40b7-b2f1-72863036ca69\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.951686 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-dns-svc\") pod \"ef031d23-3f7c-40b7-b2f1-72863036ca69\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.951795 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgjcx\" (UniqueName: \"kubernetes.io/projected/ef031d23-3f7c-40b7-b2f1-72863036ca69-kube-api-access-jgjcx\") pod \"ef031d23-3f7c-40b7-b2f1-72863036ca69\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.951839 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-config\") pod \"ef031d23-3f7c-40b7-b2f1-72863036ca69\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.951875 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-sb\") pod \"ef031d23-3f7c-40b7-b2f1-72863036ca69\" (UID: \"ef031d23-3f7c-40b7-b2f1-72863036ca69\") " Jan 27 18:27:23 crc kubenswrapper[4907]: I0127 18:27:23.957390 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ef031d23-3f7c-40b7-b2f1-72863036ca69-kube-api-access-jgjcx" (OuterVolumeSpecName: "kube-api-access-jgjcx") pod "ef031d23-3f7c-40b7-b2f1-72863036ca69" (UID: "ef031d23-3f7c-40b7-b2f1-72863036ca69"). InnerVolumeSpecName "kube-api-access-jgjcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.004292 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ef031d23-3f7c-40b7-b2f1-72863036ca69" (UID: "ef031d23-3f7c-40b7-b2f1-72863036ca69"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.018636 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-config" (OuterVolumeSpecName: "config") pod "ef031d23-3f7c-40b7-b2f1-72863036ca69" (UID: "ef031d23-3f7c-40b7-b2f1-72863036ca69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.024978 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef031d23-3f7c-40b7-b2f1-72863036ca69" (UID: "ef031d23-3f7c-40b7-b2f1-72863036ca69"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.037738 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ef031d23-3f7c-40b7-b2f1-72863036ca69" (UID: "ef031d23-3f7c-40b7-b2f1-72863036ca69"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.054171 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.054211 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.054226 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgjcx\" (UniqueName: \"kubernetes.io/projected/ef031d23-3f7c-40b7-b2f1-72863036ca69-kube-api-access-jgjcx\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.054242 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.054252 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef031d23-3f7c-40b7-b2f1-72863036ca69-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.223540 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" event={"ID":"ef031d23-3f7c-40b7-b2f1-72863036ca69","Type":"ContainerDied","Data":"14af702ae586c0a32f8c72ffae79c9a4feed72f954b871e63a6bcedfd4970e82"} Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.223628 4907 scope.go:117] "RemoveContainer" containerID="2662eb1dc76e3de2247cb485da0881450418dfc3189a9f830fe4a5c909bcefe8" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.223841 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-d92b2" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.225685 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cszb7" event={"ID":"887a018b-78e7-4ae0-9db1-8d6d236a0773","Type":"ContainerDied","Data":"88967a7bb8e6cf5d2acb1d8fa3dff1d3103b3a6625b5021ece79b4cd2a9c5568"} Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.225734 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88967a7bb8e6cf5d2acb1d8fa3dff1d3103b3a6625b5021ece79b4cd2a9c5568" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.225708 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cszb7" Jan 27 18:27:24 crc kubenswrapper[4907]: E0127 18:27:24.226908 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-d856z" podUID="1e2cf5dd-be65-4237-b77e-9bcc84cd26de" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.246718 4907 scope.go:117] "RemoveContainer" containerID="9210e052e557ee5db0c6cb854a5c34ba61fe01036174f4d90bbedc6157af149a" Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.296410 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-d92b2"] Jan 27 18:27:24 crc kubenswrapper[4907]: I0127 18:27:24.309739 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-d92b2"] Jan 27 18:27:25 crc kubenswrapper[4907]: I0127 18:27:25.605660 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:27:25 crc kubenswrapper[4907]: I0127 18:27:25.605992 4907 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/prometheus-metric-storage-0" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="thanos-sidecar" containerID="cri-o://f17aa6e5387f4b400d9300d898498f9c1c72f3c087dfae0d6458c79687bdd903" gracePeriod=600 Jan 27 18:27:25 crc kubenswrapper[4907]: I0127 18:27:25.606048 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="config-reloader" containerID="cri-o://350e9673241bb71ce82a65cf04d4c261de68a045dd6387627fc7a3c8bcd317fa" gracePeriod=600 Jan 27 18:27:25 crc kubenswrapper[4907]: I0127 18:27:25.605953 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="prometheus" containerID="cri-o://2e2c396046bd916e198432130b3c1ef49e128425c030605adab0e67ceca6b8eb" gracePeriod=600 Jan 27 18:27:25 crc kubenswrapper[4907]: I0127 18:27:25.779502 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef031d23-3f7c-40b7-b2f1-72863036ca69" path="/var/lib/kubelet/pods/ef031d23-3f7c-40b7-b2f1-72863036ca69/volumes" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.248983 4907 generic.go:334] "Generic (PLEG): container finished" podID="07d384d2-43f4-4290-837f-fb784fc28b37" containerID="f17aa6e5387f4b400d9300d898498f9c1c72f3c087dfae0d6458c79687bdd903" exitCode=0 Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.249019 4907 generic.go:334] "Generic (PLEG): container finished" podID="07d384d2-43f4-4290-837f-fb784fc28b37" containerID="350e9673241bb71ce82a65cf04d4c261de68a045dd6387627fc7a3c8bcd317fa" exitCode=0 Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.249029 4907 generic.go:334] "Generic (PLEG): container finished" podID="07d384d2-43f4-4290-837f-fb784fc28b37" containerID="2e2c396046bd916e198432130b3c1ef49e128425c030605adab0e67ceca6b8eb" exitCode=0 Jan 27 18:27:26 crc 
kubenswrapper[4907]: I0127 18:27:26.249285 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07d384d2-43f4-4290-837f-fb784fc28b37","Type":"ContainerDied","Data":"f17aa6e5387f4b400d9300d898498f9c1c72f3c087dfae0d6458c79687bdd903"} Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.249388 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07d384d2-43f4-4290-837f-fb784fc28b37","Type":"ContainerDied","Data":"350e9673241bb71ce82a65cf04d4c261de68a045dd6387627fc7a3c8bcd317fa"} Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.249463 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07d384d2-43f4-4290-837f-fb784fc28b37","Type":"ContainerDied","Data":"2e2c396046bd916e198432130b3c1ef49e128425c030605adab0e67ceca6b8eb"} Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.347492 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-cszb7"] Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.362022 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-cszb7"] Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.524427 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.524488 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 
27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.739702 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.817182 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/07d384d2-43f4-4290-837f-fb784fc28b37-config-out\") pod \"07d384d2-43f4-4290-837f-fb784fc28b37\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.817360 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-0\") pod \"07d384d2-43f4-4290-837f-fb784fc28b37\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.817525 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-thanos-prometheus-http-client-file\") pod \"07d384d2-43f4-4290-837f-fb784fc28b37\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.817609 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-config\") pod \"07d384d2-43f4-4290-837f-fb784fc28b37\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.817626 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tslf\" (UniqueName: \"kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-kube-api-access-8tslf\") pod \"07d384d2-43f4-4290-837f-fb784fc28b37\" (UID: 
\"07d384d2-43f4-4290-837f-fb784fc28b37\") " Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.817688 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-tls-assets\") pod \"07d384d2-43f4-4290-837f-fb784fc28b37\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.817707 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-2\") pod \"07d384d2-43f4-4290-837f-fb784fc28b37\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.817911 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"07d384d2-43f4-4290-837f-fb784fc28b37\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.817939 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-web-config\") pod \"07d384d2-43f4-4290-837f-fb784fc28b37\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.817976 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-1\") pod \"07d384d2-43f4-4290-837f-fb784fc28b37\" (UID: \"07d384d2-43f4-4290-837f-fb784fc28b37\") " Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.818497 4907 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "07d384d2-43f4-4290-837f-fb784fc28b37" (UID: "07d384d2-43f4-4290-837f-fb784fc28b37"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.818670 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "07d384d2-43f4-4290-837f-fb784fc28b37" (UID: "07d384d2-43f4-4290-837f-fb784fc28b37"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.818766 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "07d384d2-43f4-4290-837f-fb784fc28b37" (UID: "07d384d2-43f4-4290-837f-fb784fc28b37"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.819032 4907 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.825037 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "07d384d2-43f4-4290-837f-fb784fc28b37" (UID: "07d384d2-43f4-4290-837f-fb784fc28b37"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.825280 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07d384d2-43f4-4290-837f-fb784fc28b37-config-out" (OuterVolumeSpecName: "config-out") pod "07d384d2-43f4-4290-837f-fb784fc28b37" (UID: "07d384d2-43f4-4290-837f-fb784fc28b37"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.825338 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-config" (OuterVolumeSpecName: "config") pod "07d384d2-43f4-4290-837f-fb784fc28b37" (UID: "07d384d2-43f4-4290-837f-fb784fc28b37"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.831726 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "07d384d2-43f4-4290-837f-fb784fc28b37" (UID: "07d384d2-43f4-4290-837f-fb784fc28b37"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.834722 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-kube-api-access-8tslf" (OuterVolumeSpecName: "kube-api-access-8tslf") pod "07d384d2-43f4-4290-837f-fb784fc28b37" (UID: "07d384d2-43f4-4290-837f-fb784fc28b37"). InnerVolumeSpecName "kube-api-access-8tslf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.838748 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "07d384d2-43f4-4290-837f-fb784fc28b37" (UID: "07d384d2-43f4-4290-837f-fb784fc28b37"). InnerVolumeSpecName "pvc-f7807fd9-6025-4711-8134-26e284a305f6". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.851953 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-web-config" (OuterVolumeSpecName: "web-config") pod "07d384d2-43f4-4290-837f-fb784fc28b37" (UID: "07d384d2-43f4-4290-837f-fb784fc28b37"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.921532 4907 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.921594 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.921611 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tslf\" (UniqueName: \"kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-kube-api-access-8tslf\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.921621 4907 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/07d384d2-43f4-4290-837f-fb784fc28b37-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.921630 4907 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.921686 4907 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") on node \"crc\" " Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.921699 4907 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/07d384d2-43f4-4290-837f-fb784fc28b37-web-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.921708 4907 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/07d384d2-43f4-4290-837f-fb784fc28b37-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.921716 4907 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/07d384d2-43f4-4290-837f-fb784fc28b37-config-out\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.943376 4907 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 18:27:26 crc kubenswrapper[4907]: I0127 18:27:26.943517 4907 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f7807fd9-6025-4711-8134-26e284a305f6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6") on node "crc" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.023255 4907 reconciler_common.go:293] "Volume detached for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.261598 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"07d384d2-43f4-4290-837f-fb784fc28b37","Type":"ContainerDied","Data":"fc8f0cf918d2f1e4ee6543675a8a734f0eae6d553ac279fc26c4b3464bce8f68"} Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.261640 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.261927 4907 scope.go:117] "RemoveContainer" containerID="f17aa6e5387f4b400d9300d898498f9c1c72f3c087dfae0d6458c79687bdd903" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.285435 4907 scope.go:117] "RemoveContainer" containerID="350e9673241bb71ce82a65cf04d4c261de68a045dd6387627fc7a3c8bcd317fa" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.303172 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.307969 4907 scope.go:117] "RemoveContainer" containerID="2e2c396046bd916e198432130b3c1ef49e128425c030605adab0e67ceca6b8eb" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.316913 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.329819 4907 scope.go:117] "RemoveContainer" containerID="0cb2b5617ca146cb8c79400c4a0bcad1760b35f1aee6925b3051c57fab16559e" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.354513 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:27:27 crc kubenswrapper[4907]: E0127 18:27:27.355004 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="config-reloader" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355022 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="config-reloader" Jan 27 18:27:27 crc kubenswrapper[4907]: E0127 18:27:27.355043 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="thanos-sidecar" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355318 4907 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="thanos-sidecar" Jan 27 18:27:27 crc kubenswrapper[4907]: E0127 18:27:27.355331 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c6a42d-7a62-485f-8700-55b962892c25" containerName="ovn-config" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355338 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c6a42d-7a62-485f-8700-55b962892c25" containerName="ovn-config" Jan 27 18:27:27 crc kubenswrapper[4907]: E0127 18:27:27.355370 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="prometheus" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355378 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="prometheus" Jan 27 18:27:27 crc kubenswrapper[4907]: E0127 18:27:27.355403 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="init-config-reloader" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355409 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="init-config-reloader" Jan 27 18:27:27 crc kubenswrapper[4907]: E0127 18:27:27.355422 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef031d23-3f7c-40b7-b2f1-72863036ca69" containerName="init" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355429 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef031d23-3f7c-40b7-b2f1-72863036ca69" containerName="init" Jan 27 18:27:27 crc kubenswrapper[4907]: E0127 18:27:27.355454 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="887a018b-78e7-4ae0-9db1-8d6d236a0773" containerName="mariadb-account-create-update" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355463 4907 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="887a018b-78e7-4ae0-9db1-8d6d236a0773" containerName="mariadb-account-create-update" Jan 27 18:27:27 crc kubenswrapper[4907]: E0127 18:27:27.355481 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef031d23-3f7c-40b7-b2f1-72863036ca69" containerName="dnsmasq-dns" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355489 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef031d23-3f7c-40b7-b2f1-72863036ca69" containerName="dnsmasq-dns" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355748 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="thanos-sidecar" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355770 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="prometheus" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355786 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="887a018b-78e7-4ae0-9db1-8d6d236a0773" containerName="mariadb-account-create-update" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355799 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" containerName="config-reloader" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355818 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="31c6a42d-7a62-485f-8700-55b962892c25" containerName="ovn-config" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.355833 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef031d23-3f7c-40b7-b2f1-72863036ca69" containerName="dnsmasq-dns" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.357971 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.361635 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.361847 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.362017 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.362278 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.362475 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-v8l29" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.363202 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.363350 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.363539 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.382505 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.385373 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.435786 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.435833 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c9228204-5d32-47ea-9236-8ae3e4d5eebc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.435871 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.435920 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c9228204-5d32-47ea-9236-8ae3e4d5eebc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.435949 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mdkc\" (UniqueName: \"kubernetes.io/projected/c9228204-5d32-47ea-9236-8ae3e4d5eebc-kube-api-access-8mdkc\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 
18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.435980 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.436016 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.436041 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-config\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.436063 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.436079 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/c9228204-5d32-47ea-9236-8ae3e4d5eebc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.436106 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.436126 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c9228204-5d32-47ea-9236-8ae3e4d5eebc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.436161 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c9228204-5d32-47ea-9236-8ae3e4d5eebc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.538433 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c9228204-5d32-47ea-9236-8ae3e4d5eebc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: 
\"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.538517 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c9228204-5d32-47ea-9236-8ae3e4d5eebc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.538681 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.538724 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c9228204-5d32-47ea-9236-8ae3e4d5eebc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.538764 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.538813 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c9228204-5d32-47ea-9236-8ae3e4d5eebc-tls-assets\") pod 
\"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.538846 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mdkc\" (UniqueName: \"kubernetes.io/projected/c9228204-5d32-47ea-9236-8ae3e4d5eebc-kube-api-access-8mdkc\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.538985 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.539036 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.539375 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c9228204-5d32-47ea-9236-8ae3e4d5eebc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.539110 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-config\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.539498 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.539530 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c9228204-5d32-47ea-9236-8ae3e4d5eebc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.539582 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.539492 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c9228204-5d32-47ea-9236-8ae3e4d5eebc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 
18:27:27.540613 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c9228204-5d32-47ea-9236-8ae3e4d5eebc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.542160 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c9228204-5d32-47ea-9236-8ae3e4d5eebc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.543043 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.543253 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.543878 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.543913 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7c10aa9d009ec60f264ba4aa31b8554e40bc9aa6367f517a78b05ac7bb1849b2/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.545595 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.547063 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c9228204-5d32-47ea-9236-8ae3e4d5eebc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.547682 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.557410 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" 
(UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.560498 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c9228204-5d32-47ea-9236-8ae3e4d5eebc-config\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.561957 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mdkc\" (UniqueName: \"kubernetes.io/projected/c9228204-5d32-47ea-9236-8ae3e4d5eebc-kube-api-access-8mdkc\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.595208 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f7807fd9-6025-4711-8134-26e284a305f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7807fd9-6025-4711-8134-26e284a305f6\") pod \"prometheus-metric-storage-0\" (UID: \"c9228204-5d32-47ea-9236-8ae3e4d5eebc\") " pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.736171 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.760897 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07d384d2-43f4-4290-837f-fb784fc28b37" path="/var/lib/kubelet/pods/07d384d2-43f4-4290-837f-fb784fc28b37/volumes" Jan 27 18:27:27 crc kubenswrapper[4907]: I0127 18:27:27.761967 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="887a018b-78e7-4ae0-9db1-8d6d236a0773" path="/var/lib/kubelet/pods/887a018b-78e7-4ae0-9db1-8d6d236a0773/volumes" Jan 27 18:27:28 crc kubenswrapper[4907]: I0127 18:27:28.389653 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 18:27:29 crc kubenswrapper[4907]: I0127 18:27:29.285458 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c9228204-5d32-47ea-9236-8ae3e4d5eebc","Type":"ContainerStarted","Data":"c5bd11e769f6116806aa8b861b9bf19795572d8b7ccbfb13f52cfc9c500b44fc"} Jan 27 18:27:29 crc kubenswrapper[4907]: I0127 18:27:29.642846 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 27 18:27:29 crc kubenswrapper[4907]: I0127 18:27:29.927823 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.333081 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-jsvqc"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.334760 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-jsvqc" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.344521 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-jsvqc"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.399141 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flr2t\" (UniqueName: \"kubernetes.io/projected/85c8faae-95fb-4533-b45c-51e91bb95947-kube-api-access-flr2t\") pod \"heat-db-create-jsvqc\" (UID: \"85c8faae-95fb-4533-b45c-51e91bb95947\") " pod="openstack/heat-db-create-jsvqc" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.399210 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85c8faae-95fb-4533-b45c-51e91bb95947-operator-scripts\") pod \"heat-db-create-jsvqc\" (UID: \"85c8faae-95fb-4533-b45c-51e91bb95947\") " pod="openstack/heat-db-create-jsvqc" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.501015 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flr2t\" (UniqueName: \"kubernetes.io/projected/85c8faae-95fb-4533-b45c-51e91bb95947-kube-api-access-flr2t\") pod \"heat-db-create-jsvqc\" (UID: \"85c8faae-95fb-4533-b45c-51e91bb95947\") " pod="openstack/heat-db-create-jsvqc" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.501087 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85c8faae-95fb-4533-b45c-51e91bb95947-operator-scripts\") pod \"heat-db-create-jsvqc\" (UID: \"85c8faae-95fb-4533-b45c-51e91bb95947\") " pod="openstack/heat-db-create-jsvqc" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.501925 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/85c8faae-95fb-4533-b45c-51e91bb95947-operator-scripts\") pod \"heat-db-create-jsvqc\" (UID: \"85c8faae-95fb-4533-b45c-51e91bb95947\") " pod="openstack/heat-db-create-jsvqc" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.536157 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-qz6th"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.537305 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flr2t\" (UniqueName: \"kubernetes.io/projected/85c8faae-95fb-4533-b45c-51e91bb95947-kube-api-access-flr2t\") pod \"heat-db-create-jsvqc\" (UID: \"85c8faae-95fb-4533-b45c-51e91bb95947\") " pod="openstack/heat-db-create-jsvqc" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.538280 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qz6th" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.551770 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qz6th"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.575998 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-fa35-account-create-update-nlm4d"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.577366 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-fa35-account-create-update-nlm4d" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.583680 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.602774 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/701eaff9-db27-4bff-975c-b8ebf034725f-operator-scripts\") pod \"barbican-fa35-account-create-update-nlm4d\" (UID: \"701eaff9-db27-4bff-975c-b8ebf034725f\") " pod="openstack/barbican-fa35-account-create-update-nlm4d" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.602994 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac5cca69-8afc-417f-9f22-93c279730bf7-operator-scripts\") pod \"cinder-db-create-qz6th\" (UID: \"ac5cca69-8afc-417f-9f22-93c279730bf7\") " pod="openstack/cinder-db-create-qz6th" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.603059 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28rdf\" (UniqueName: \"kubernetes.io/projected/ac5cca69-8afc-417f-9f22-93c279730bf7-kube-api-access-28rdf\") pod \"cinder-db-create-qz6th\" (UID: \"ac5cca69-8afc-417f-9f22-93c279730bf7\") " pod="openstack/cinder-db-create-qz6th" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.603201 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4hpn\" (UniqueName: \"kubernetes.io/projected/701eaff9-db27-4bff-975c-b8ebf034725f-kube-api-access-m4hpn\") pod \"barbican-fa35-account-create-update-nlm4d\" (UID: \"701eaff9-db27-4bff-975c-b8ebf034725f\") " pod="openstack/barbican-fa35-account-create-update-nlm4d" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 
18:27:30.617282 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fa35-account-create-update-nlm4d"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.646680 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-jjm2k"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.648046 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.651739 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cspnd" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.652276 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-jsvqc" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.652466 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.653074 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.653320 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.660116 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-jjm2k"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.670100 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-4cxkf"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.671829 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-4cxkf" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.680180 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4cxkf"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.704966 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/701eaff9-db27-4bff-975c-b8ebf034725f-operator-scripts\") pod \"barbican-fa35-account-create-update-nlm4d\" (UID: \"701eaff9-db27-4bff-975c-b8ebf034725f\") " pod="openstack/barbican-fa35-account-create-update-nlm4d" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.705016 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32b7a898-5d57-496a-8ad1-380b636e3629-operator-scripts\") pod \"barbican-db-create-4cxkf\" (UID: \"32b7a898-5d57-496a-8ad1-380b636e3629\") " pod="openstack/barbican-db-create-4cxkf" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.705050 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwn5v\" (UniqueName: \"kubernetes.io/projected/2dd8fea0-24a6-4212-875a-5cf95105f549-kube-api-access-qwn5v\") pod \"keystone-db-sync-jjm2k\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.705075 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-config-data\") pod \"keystone-db-sync-jjm2k\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.705158 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac5cca69-8afc-417f-9f22-93c279730bf7-operator-scripts\") pod \"cinder-db-create-qz6th\" (UID: \"ac5cca69-8afc-417f-9f22-93c279730bf7\") " pod="openstack/cinder-db-create-qz6th" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.705190 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28rdf\" (UniqueName: \"kubernetes.io/projected/ac5cca69-8afc-417f-9f22-93c279730bf7-kube-api-access-28rdf\") pod \"cinder-db-create-qz6th\" (UID: \"ac5cca69-8afc-417f-9f22-93c279730bf7\") " pod="openstack/cinder-db-create-qz6th" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.705273 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4hpn\" (UniqueName: \"kubernetes.io/projected/701eaff9-db27-4bff-975c-b8ebf034725f-kube-api-access-m4hpn\") pod \"barbican-fa35-account-create-update-nlm4d\" (UID: \"701eaff9-db27-4bff-975c-b8ebf034725f\") " pod="openstack/barbican-fa35-account-create-update-nlm4d" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.705334 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-combined-ca-bundle\") pod \"keystone-db-sync-jjm2k\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.705360 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwkrx\" (UniqueName: \"kubernetes.io/projected/32b7a898-5d57-496a-8ad1-380b636e3629-kube-api-access-cwkrx\") pod \"barbican-db-create-4cxkf\" (UID: \"32b7a898-5d57-496a-8ad1-380b636e3629\") " pod="openstack/barbican-db-create-4cxkf" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.707276 4907 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/701eaff9-db27-4bff-975c-b8ebf034725f-operator-scripts\") pod \"barbican-fa35-account-create-update-nlm4d\" (UID: \"701eaff9-db27-4bff-975c-b8ebf034725f\") " pod="openstack/barbican-fa35-account-create-update-nlm4d" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.714183 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac5cca69-8afc-417f-9f22-93c279730bf7-operator-scripts\") pod \"cinder-db-create-qz6th\" (UID: \"ac5cca69-8afc-417f-9f22-93c279730bf7\") " pod="openstack/cinder-db-create-qz6th" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.714673 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-f55d-account-create-update-gfk7k"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.716070 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-f55d-account-create-update-gfk7k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.719038 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-f55d-account-create-update-gfk7k"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.719653 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.764355 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-lpvwr"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.767104 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lpvwr" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.786744 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-4f95-account-create-update-s69m5"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.800128 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-4f95-account-create-update-s69m5" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.821703 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-combined-ca-bundle\") pod \"keystone-db-sync-jjm2k\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.821759 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwkrx\" (UniqueName: \"kubernetes.io/projected/32b7a898-5d57-496a-8ad1-380b636e3629-kube-api-access-cwkrx\") pod \"barbican-db-create-4cxkf\" (UID: \"32b7a898-5d57-496a-8ad1-380b636e3629\") " pod="openstack/barbican-db-create-4cxkf" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.821883 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32b7a898-5d57-496a-8ad1-380b636e3629-operator-scripts\") pod \"barbican-db-create-4cxkf\" (UID: \"32b7a898-5d57-496a-8ad1-380b636e3629\") " pod="openstack/barbican-db-create-4cxkf" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.821915 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwn5v\" (UniqueName: \"kubernetes.io/projected/2dd8fea0-24a6-4212-875a-5cf95105f549-kube-api-access-qwn5v\") pod \"keystone-db-sync-jjm2k\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.821938 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-config-data\") pod \"keystone-db-sync-jjm2k\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " 
pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.822092 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.828174 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32b7a898-5d57-496a-8ad1-380b636e3629-operator-scripts\") pod \"barbican-db-create-4cxkf\" (UID: \"32b7a898-5d57-496a-8ad1-380b636e3629\") " pod="openstack/barbican-db-create-4cxkf" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.850859 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28rdf\" (UniqueName: \"kubernetes.io/projected/ac5cca69-8afc-417f-9f22-93c279730bf7-kube-api-access-28rdf\") pod \"cinder-db-create-qz6th\" (UID: \"ac5cca69-8afc-417f-9f22-93c279730bf7\") " pod="openstack/cinder-db-create-qz6th" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.852740 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-config-data\") pod \"keystone-db-sync-jjm2k\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.858435 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4hpn\" (UniqueName: \"kubernetes.io/projected/701eaff9-db27-4bff-975c-b8ebf034725f-kube-api-access-m4hpn\") pod \"barbican-fa35-account-create-update-nlm4d\" (UID: \"701eaff9-db27-4bff-975c-b8ebf034725f\") " pod="openstack/barbican-fa35-account-create-update-nlm4d" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.863055 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-combined-ca-bundle\") pod 
\"keystone-db-sync-jjm2k\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.877714 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwkrx\" (UniqueName: \"kubernetes.io/projected/32b7a898-5d57-496a-8ad1-380b636e3629-kube-api-access-cwkrx\") pod \"barbican-db-create-4cxkf\" (UID: \"32b7a898-5d57-496a-8ad1-380b636e3629\") " pod="openstack/barbican-db-create-4cxkf" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.881712 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwn5v\" (UniqueName: \"kubernetes.io/projected/2dd8fea0-24a6-4212-875a-5cf95105f549-kube-api-access-qwn5v\") pod \"keystone-db-sync-jjm2k\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.899742 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qz6th" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.924734 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-4f95-account-create-update-s69m5"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.935029 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fa35-account-create-update-nlm4d" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.936924 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lpvwr"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.973207 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.973968 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8259-account-create-update-b45js"] Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.975459 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8259-account-create-update-b45js" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.978402 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.982510 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d206e054-cdc8-4a59-9de8-93bfeae80700-operator-scripts\") pod \"heat-f55d-account-create-update-gfk7k\" (UID: \"d206e054-cdc8-4a59-9de8-93bfeae80700\") " pod="openstack/heat-f55d-account-create-update-gfk7k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.982595 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmgn2\" (UniqueName: \"kubernetes.io/projected/d206e054-cdc8-4a59-9de8-93bfeae80700-kube-api-access-wmgn2\") pod \"heat-f55d-account-create-update-gfk7k\" (UID: \"d206e054-cdc8-4a59-9de8-93bfeae80700\") " pod="openstack/heat-f55d-account-create-update-gfk7k" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.982628 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b54f9573-0bd6-4133-872a-b9e73129d654-operator-scripts\") pod \"neutron-db-create-lpvwr\" (UID: \"b54f9573-0bd6-4133-872a-b9e73129d654\") " pod="openstack/neutron-db-create-lpvwr" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.982660 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w54l\" (UniqueName: \"kubernetes.io/projected/7844ef4e-92dd-4ea6-a792-b255290ef833-kube-api-access-6w54l\") pod \"neutron-8259-account-create-update-b45js\" (UID: \"7844ef4e-92dd-4ea6-a792-b255290ef833\") " pod="openstack/neutron-8259-account-create-update-b45js" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.982692 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3998964-67eb-4adb-912d-a6367ae3beaf-operator-scripts\") pod \"cinder-4f95-account-create-update-s69m5\" (UID: \"c3998964-67eb-4adb-912d-a6367ae3beaf\") " pod="openstack/cinder-4f95-account-create-update-s69m5" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.982717 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7844ef4e-92dd-4ea6-a792-b255290ef833-operator-scripts\") pod \"neutron-8259-account-create-update-b45js\" (UID: \"7844ef4e-92dd-4ea6-a792-b255290ef833\") " pod="openstack/neutron-8259-account-create-update-b45js" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.983145 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stqxw\" (UniqueName: \"kubernetes.io/projected/c3998964-67eb-4adb-912d-a6367ae3beaf-kube-api-access-stqxw\") pod \"cinder-4f95-account-create-update-s69m5\" (UID: \"c3998964-67eb-4adb-912d-a6367ae3beaf\") " pod="openstack/cinder-4f95-account-create-update-s69m5" Jan 27 18:27:30 crc kubenswrapper[4907]: I0127 18:27:30.983277 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5599\" (UniqueName: \"kubernetes.io/projected/b54f9573-0bd6-4133-872a-b9e73129d654-kube-api-access-w5599\") pod \"neutron-db-create-lpvwr\" (UID: 
\"b54f9573-0bd6-4133-872a-b9e73129d654\") " pod="openstack/neutron-db-create-lpvwr" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.000405 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4cxkf" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.020742 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8259-account-create-update-b45js"] Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.084086 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d206e054-cdc8-4a59-9de8-93bfeae80700-operator-scripts\") pod \"heat-f55d-account-create-update-gfk7k\" (UID: \"d206e054-cdc8-4a59-9de8-93bfeae80700\") " pod="openstack/heat-f55d-account-create-update-gfk7k" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.084145 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmgn2\" (UniqueName: \"kubernetes.io/projected/d206e054-cdc8-4a59-9de8-93bfeae80700-kube-api-access-wmgn2\") pod \"heat-f55d-account-create-update-gfk7k\" (UID: \"d206e054-cdc8-4a59-9de8-93bfeae80700\") " pod="openstack/heat-f55d-account-create-update-gfk7k" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.084169 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b54f9573-0bd6-4133-872a-b9e73129d654-operator-scripts\") pod \"neutron-db-create-lpvwr\" (UID: \"b54f9573-0bd6-4133-872a-b9e73129d654\") " pod="openstack/neutron-db-create-lpvwr" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.084194 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w54l\" (UniqueName: \"kubernetes.io/projected/7844ef4e-92dd-4ea6-a792-b255290ef833-kube-api-access-6w54l\") pod \"neutron-8259-account-create-update-b45js\" (UID: 
\"7844ef4e-92dd-4ea6-a792-b255290ef833\") " pod="openstack/neutron-8259-account-create-update-b45js" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.084219 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3998964-67eb-4adb-912d-a6367ae3beaf-operator-scripts\") pod \"cinder-4f95-account-create-update-s69m5\" (UID: \"c3998964-67eb-4adb-912d-a6367ae3beaf\") " pod="openstack/cinder-4f95-account-create-update-s69m5" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.084234 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7844ef4e-92dd-4ea6-a792-b255290ef833-operator-scripts\") pod \"neutron-8259-account-create-update-b45js\" (UID: \"7844ef4e-92dd-4ea6-a792-b255290ef833\") " pod="openstack/neutron-8259-account-create-update-b45js" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.084293 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stqxw\" (UniqueName: \"kubernetes.io/projected/c3998964-67eb-4adb-912d-a6367ae3beaf-kube-api-access-stqxw\") pod \"cinder-4f95-account-create-update-s69m5\" (UID: \"c3998964-67eb-4adb-912d-a6367ae3beaf\") " pod="openstack/cinder-4f95-account-create-update-s69m5" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.084322 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5599\" (UniqueName: \"kubernetes.io/projected/b54f9573-0bd6-4133-872a-b9e73129d654-kube-api-access-w5599\") pod \"neutron-db-create-lpvwr\" (UID: \"b54f9573-0bd6-4133-872a-b9e73129d654\") " pod="openstack/neutron-db-create-lpvwr" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.085680 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d206e054-cdc8-4a59-9de8-93bfeae80700-operator-scripts\") pod \"heat-f55d-account-create-update-gfk7k\" (UID: \"d206e054-cdc8-4a59-9de8-93bfeae80700\") " pod="openstack/heat-f55d-account-create-update-gfk7k" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.086274 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b54f9573-0bd6-4133-872a-b9e73129d654-operator-scripts\") pod \"neutron-db-create-lpvwr\" (UID: \"b54f9573-0bd6-4133-872a-b9e73129d654\") " pod="openstack/neutron-db-create-lpvwr" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.086856 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3998964-67eb-4adb-912d-a6367ae3beaf-operator-scripts\") pod \"cinder-4f95-account-create-update-s69m5\" (UID: \"c3998964-67eb-4adb-912d-a6367ae3beaf\") " pod="openstack/cinder-4f95-account-create-update-s69m5" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.087381 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7844ef4e-92dd-4ea6-a792-b255290ef833-operator-scripts\") pod \"neutron-8259-account-create-update-b45js\" (UID: \"7844ef4e-92dd-4ea6-a792-b255290ef833\") " pod="openstack/neutron-8259-account-create-update-b45js" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.109296 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmgn2\" (UniqueName: \"kubernetes.io/projected/d206e054-cdc8-4a59-9de8-93bfeae80700-kube-api-access-wmgn2\") pod \"heat-f55d-account-create-update-gfk7k\" (UID: \"d206e054-cdc8-4a59-9de8-93bfeae80700\") " pod="openstack/heat-f55d-account-create-update-gfk7k" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.114961 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w54l\" 
(UniqueName: \"kubernetes.io/projected/7844ef4e-92dd-4ea6-a792-b255290ef833-kube-api-access-6w54l\") pod \"neutron-8259-account-create-update-b45js\" (UID: \"7844ef4e-92dd-4ea6-a792-b255290ef833\") " pod="openstack/neutron-8259-account-create-update-b45js" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.115503 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stqxw\" (UniqueName: \"kubernetes.io/projected/c3998964-67eb-4adb-912d-a6367ae3beaf-kube-api-access-stqxw\") pod \"cinder-4f95-account-create-update-s69m5\" (UID: \"c3998964-67eb-4adb-912d-a6367ae3beaf\") " pod="openstack/cinder-4f95-account-create-update-s69m5" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.139305 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5599\" (UniqueName: \"kubernetes.io/projected/b54f9573-0bd6-4133-872a-b9e73129d654-kube-api-access-w5599\") pod \"neutron-db-create-lpvwr\" (UID: \"b54f9573-0bd6-4133-872a-b9e73129d654\") " pod="openstack/neutron-db-create-lpvwr" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.154371 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4f95-account-create-update-s69m5" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.281160 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-jsvqc"] Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.332776 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8259-account-create-update-b45js" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.346283 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-f55d-account-create-update-gfk7k" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.359983 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-fxqjb"] Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.361523 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fxqjb" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.365578 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.394110 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fxqjb"] Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.406967 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lpvwr" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.499671 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/421865e2-2878-4bc4-9480-7afb5e7133fd-operator-scripts\") pod \"root-account-create-update-fxqjb\" (UID: \"421865e2-2878-4bc4-9480-7afb5e7133fd\") " pod="openstack/root-account-create-update-fxqjb" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.500018 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwfbt\" (UniqueName: \"kubernetes.io/projected/421865e2-2878-4bc4-9480-7afb5e7133fd-kube-api-access-pwfbt\") pod \"root-account-create-update-fxqjb\" (UID: \"421865e2-2878-4bc4-9480-7afb5e7133fd\") " pod="openstack/root-account-create-update-fxqjb" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.603303 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/421865e2-2878-4bc4-9480-7afb5e7133fd-operator-scripts\") pod \"root-account-create-update-fxqjb\" (UID: \"421865e2-2878-4bc4-9480-7afb5e7133fd\") " pod="openstack/root-account-create-update-fxqjb" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.603368 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwfbt\" (UniqueName: \"kubernetes.io/projected/421865e2-2878-4bc4-9480-7afb5e7133fd-kube-api-access-pwfbt\") pod \"root-account-create-update-fxqjb\" (UID: \"421865e2-2878-4bc4-9480-7afb5e7133fd\") " pod="openstack/root-account-create-update-fxqjb" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.604326 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/421865e2-2878-4bc4-9480-7afb5e7133fd-operator-scripts\") pod \"root-account-create-update-fxqjb\" (UID: \"421865e2-2878-4bc4-9480-7afb5e7133fd\") " pod="openstack/root-account-create-update-fxqjb" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.760027 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qz6th"] Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.769373 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwfbt\" (UniqueName: \"kubernetes.io/projected/421865e2-2878-4bc4-9480-7afb5e7133fd-kube-api-access-pwfbt\") pod \"root-account-create-update-fxqjb\" (UID: \"421865e2-2878-4bc4-9480-7afb5e7133fd\") " pod="openstack/root-account-create-update-fxqjb" Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.961935 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4cxkf"] Jan 27 18:27:31 crc kubenswrapper[4907]: I0127 18:27:31.997821 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fxqjb" Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.029046 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fa35-account-create-update-nlm4d"] Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.067917 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-jjm2k"] Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.106799 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.225807 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-4f95-account-create-update-s69m5"] Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.336984 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8259-account-create-update-b45js"] Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.353731 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4f95-account-create-update-s69m5" event={"ID":"c3998964-67eb-4adb-912d-a6367ae3beaf","Type":"ContainerStarted","Data":"1db6cbf55cb0c25603a83b72a7e59442a34fdae25117fdae494efe5194fbe9f9"} Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.357148 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4cxkf" event={"ID":"32b7a898-5d57-496a-8ad1-380b636e3629","Type":"ContainerStarted","Data":"7ebdc1e4356eb53f8529b577c97b97d9933e20814623f532b4bb4b915edac61b"} Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.359351 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-jsvqc" event={"ID":"85c8faae-95fb-4533-b45c-51e91bb95947","Type":"ContainerStarted","Data":"68b89d92a4036b54b7b4b4e2ad10f550a2312816b916239f6cf930b267395fdc"} Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.359390 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-db-create-jsvqc" event={"ID":"85c8faae-95fb-4533-b45c-51e91bb95947","Type":"ContainerStarted","Data":"89855652aa489b1f329775a7995f090e17a9598a85b1756c65afd75b61a49760"} Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.381644 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c9228204-5d32-47ea-9236-8ae3e4d5eebc","Type":"ContainerStarted","Data":"f67b424af53cfdd5ce878a8022111e895a8cf9cfb3040f9293498a59c3639643"} Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.383892 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jjm2k" event={"ID":"2dd8fea0-24a6-4212-875a-5cf95105f549","Type":"ContainerStarted","Data":"e14480432b2735c1df3bf974cf8b36e1846853ba2905c702511a67ffaa4ead95"} Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.385744 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fa35-account-create-update-nlm4d" event={"ID":"701eaff9-db27-4bff-975c-b8ebf034725f","Type":"ContainerStarted","Data":"85da0aa90c973bde462e574429252127836f411550c3f80df423e234745c0a3a"} Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.391419 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qz6th" event={"ID":"ac5cca69-8afc-417f-9f22-93c279730bf7","Type":"ContainerStarted","Data":"af1bc617ee948a447cd82ae616ac3ed3b9176830827dbc589579b48c36bb85c3"} Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.393687 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8259-account-create-update-b45js" event={"ID":"7844ef4e-92dd-4ea6-a792-b255290ef833","Type":"ContainerStarted","Data":"c4eb5493aee2cfc49b3a7bf9c6fd1fb99524c4b2093bd9c451b7385b9a2478b9"} Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.411099 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-jsvqc" podStartSLOduration=2.411082672 
podStartE2EDuration="2.411082672s" podCreationTimestamp="2026-01-27 18:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:32.37854027 +0000 UTC m=+1307.507822882" watchObservedRunningTime="2026-01-27 18:27:32.411082672 +0000 UTC m=+1307.540365274" Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.416322 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-f55d-account-create-update-gfk7k"] Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.610529 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-qz6th" podStartSLOduration=2.6105141549999997 podStartE2EDuration="2.610514155s" podCreationTimestamp="2026-01-27 18:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:32.455830074 +0000 UTC m=+1307.585112676" watchObservedRunningTime="2026-01-27 18:27:32.610514155 +0000 UTC m=+1307.739796767" Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.615785 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lpvwr"] Jan 27 18:27:32 crc kubenswrapper[4907]: I0127 18:27:32.738574 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fxqjb"] Jan 27 18:27:33 crc kubenswrapper[4907]: E0127 18:27:33.323333 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac5cca69_8afc_417f_9f22_93c279730bf7.slice/crio-conmon-63ae2b63f7a45c10875f978382e0401747b3a11acdd681eda189d55c63e35186.scope\": RecentStats: unable to find data in memory cache]" Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.411847 4907 generic.go:334] "Generic (PLEG): container finished" 
podID="c3998964-67eb-4adb-912d-a6367ae3beaf" containerID="6e9d2124e0377737283913dd9cbf18f7728bb3d38ed97f318b0a2c7e1a625185" exitCode=0 Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.411968 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4f95-account-create-update-s69m5" event={"ID":"c3998964-67eb-4adb-912d-a6367ae3beaf","Type":"ContainerDied","Data":"6e9d2124e0377737283913dd9cbf18f7728bb3d38ed97f318b0a2c7e1a625185"} Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.415011 4907 generic.go:334] "Generic (PLEG): container finished" podID="85c8faae-95fb-4533-b45c-51e91bb95947" containerID="68b89d92a4036b54b7b4b4e2ad10f550a2312816b916239f6cf930b267395fdc" exitCode=0 Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.415100 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-jsvqc" event={"ID":"85c8faae-95fb-4533-b45c-51e91bb95947","Type":"ContainerDied","Data":"68b89d92a4036b54b7b4b4e2ad10f550a2312816b916239f6cf930b267395fdc"} Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.417687 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lpvwr" event={"ID":"b54f9573-0bd6-4133-872a-b9e73129d654","Type":"ContainerStarted","Data":"948b6eac5d689d6120c4131f15b39236cab8c2fef0b0c2e8b5e2f67979a39d45"} Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.417733 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lpvwr" event={"ID":"b54f9573-0bd6-4133-872a-b9e73129d654","Type":"ContainerStarted","Data":"0560c14db158d64bce6c26ea29bec4d9fb4fa1f39a3d1f43370dee25cb6a56dc"} Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.420097 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-f55d-account-create-update-gfk7k" event={"ID":"d206e054-cdc8-4a59-9de8-93bfeae80700","Type":"ContainerStarted","Data":"3b72532d7cd8d07a853ff3494ad26622a5147545c076d688f217632749ffc944"} Jan 27 18:27:33 crc 
kubenswrapper[4907]: I0127 18:27:33.420154 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-f55d-account-create-update-gfk7k" event={"ID":"d206e054-cdc8-4a59-9de8-93bfeae80700","Type":"ContainerStarted","Data":"1b7ee34ddedd08cc57a130c57125c95e25259fc35a2121708ce9c43860c03f2d"} Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.422182 4907 generic.go:334] "Generic (PLEG): container finished" podID="701eaff9-db27-4bff-975c-b8ebf034725f" containerID="f4fe6d9aa44983cf3005cdcf2f1caa2f42f6abc6fd5fd5929b1ffc71281905af" exitCode=0 Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.422229 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fa35-account-create-update-nlm4d" event={"ID":"701eaff9-db27-4bff-975c-b8ebf034725f","Type":"ContainerDied","Data":"f4fe6d9aa44983cf3005cdcf2f1caa2f42f6abc6fd5fd5929b1ffc71281905af"} Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.424232 4907 generic.go:334] "Generic (PLEG): container finished" podID="ac5cca69-8afc-417f-9f22-93c279730bf7" containerID="63ae2b63f7a45c10875f978382e0401747b3a11acdd681eda189d55c63e35186" exitCode=0 Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.424309 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qz6th" event={"ID":"ac5cca69-8afc-417f-9f22-93c279730bf7","Type":"ContainerDied","Data":"63ae2b63f7a45c10875f978382e0401747b3a11acdd681eda189d55c63e35186"} Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.432716 4907 generic.go:334] "Generic (PLEG): container finished" podID="7844ef4e-92dd-4ea6-a792-b255290ef833" containerID="902d727f209f42dea64c5a07767c7eefd3763b39fbd8787f8e221e479efe5a44" exitCode=0 Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.432781 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8259-account-create-update-b45js" 
event={"ID":"7844ef4e-92dd-4ea6-a792-b255290ef833","Type":"ContainerDied","Data":"902d727f209f42dea64c5a07767c7eefd3763b39fbd8787f8e221e479efe5a44"} Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.434904 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fxqjb" event={"ID":"421865e2-2878-4bc4-9480-7afb5e7133fd","Type":"ContainerStarted","Data":"3e594d56c4f1e528436d6bb4f406deabb15b3cc82f5b2f614f1632a7cd5cb661"} Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.434925 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fxqjb" event={"ID":"421865e2-2878-4bc4-9480-7afb5e7133fd","Type":"ContainerStarted","Data":"27b355db7d5cedc8181a02fb361426451c203f9d2ad81fdff895eeef37f20409"} Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.438530 4907 generic.go:334] "Generic (PLEG): container finished" podID="32b7a898-5d57-496a-8ad1-380b636e3629" containerID="40e8e634f7c46a3b2a6980b5fabdea0883786df5d7e952882f0176d870d9c0b4" exitCode=0 Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.439534 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4cxkf" event={"ID":"32b7a898-5d57-496a-8ad1-380b636e3629","Type":"ContainerDied","Data":"40e8e634f7c46a3b2a6980b5fabdea0883786df5d7e952882f0176d870d9c0b4"} Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.472984 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-f55d-account-create-update-gfk7k" podStartSLOduration=3.472960522 podStartE2EDuration="3.472960522s" podCreationTimestamp="2026-01-27 18:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:33.461199626 +0000 UTC m=+1308.590482238" watchObservedRunningTime="2026-01-27 18:27:33.472960522 +0000 UTC m=+1308.602243154" Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.485679 4907 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-lpvwr" podStartSLOduration=3.485661696 podStartE2EDuration="3.485661696s" podCreationTimestamp="2026-01-27 18:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:33.476852384 +0000 UTC m=+1308.606134996" watchObservedRunningTime="2026-01-27 18:27:33.485661696 +0000 UTC m=+1308.614944308" Jan 27 18:27:33 crc kubenswrapper[4907]: I0127 18:27:33.570094 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-fxqjb" podStartSLOduration=2.570077205 podStartE2EDuration="2.570077205s" podCreationTimestamp="2026-01-27 18:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:33.565008749 +0000 UTC m=+1308.694291361" watchObservedRunningTime="2026-01-27 18:27:33.570077205 +0000 UTC m=+1308.699359817" Jan 27 18:27:34 crc kubenswrapper[4907]: I0127 18:27:34.448064 4907 generic.go:334] "Generic (PLEG): container finished" podID="b54f9573-0bd6-4133-872a-b9e73129d654" containerID="948b6eac5d689d6120c4131f15b39236cab8c2fef0b0c2e8b5e2f67979a39d45" exitCode=0 Jan 27 18:27:34 crc kubenswrapper[4907]: I0127 18:27:34.448129 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lpvwr" event={"ID":"b54f9573-0bd6-4133-872a-b9e73129d654","Type":"ContainerDied","Data":"948b6eac5d689d6120c4131f15b39236cab8c2fef0b0c2e8b5e2f67979a39d45"} Jan 27 18:27:34 crc kubenswrapper[4907]: I0127 18:27:34.451066 4907 generic.go:334] "Generic (PLEG): container finished" podID="d206e054-cdc8-4a59-9de8-93bfeae80700" containerID="3b72532d7cd8d07a853ff3494ad26622a5147545c076d688f217632749ffc944" exitCode=0 Jan 27 18:27:34 crc kubenswrapper[4907]: I0127 18:27:34.451368 4907 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/heat-f55d-account-create-update-gfk7k" event={"ID":"d206e054-cdc8-4a59-9de8-93bfeae80700","Type":"ContainerDied","Data":"3b72532d7cd8d07a853ff3494ad26622a5147545c076d688f217632749ffc944"} Jan 27 18:27:34 crc kubenswrapper[4907]: I0127 18:27:34.453196 4907 generic.go:334] "Generic (PLEG): container finished" podID="421865e2-2878-4bc4-9480-7afb5e7133fd" containerID="3e594d56c4f1e528436d6bb4f406deabb15b3cc82f5b2f614f1632a7cd5cb661" exitCode=0 Jan 27 18:27:34 crc kubenswrapper[4907]: I0127 18:27:34.453256 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fxqjb" event={"ID":"421865e2-2878-4bc4-9480-7afb5e7133fd","Type":"ContainerDied","Data":"3e594d56c4f1e528436d6bb4f406deabb15b3cc82f5b2f614f1632a7cd5cb661"} Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.331607 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-jsvqc" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.340916 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8259-account-create-update-b45js" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.358366 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fa35-account-create-update-nlm4d" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.365233 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4cxkf" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.377013 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-4f95-account-create-update-s69m5" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.381138 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/701eaff9-db27-4bff-975c-b8ebf034725f-operator-scripts\") pod \"701eaff9-db27-4bff-975c-b8ebf034725f\" (UID: \"701eaff9-db27-4bff-975c-b8ebf034725f\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.381358 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6w54l\" (UniqueName: \"kubernetes.io/projected/7844ef4e-92dd-4ea6-a792-b255290ef833-kube-api-access-6w54l\") pod \"7844ef4e-92dd-4ea6-a792-b255290ef833\" (UID: \"7844ef4e-92dd-4ea6-a792-b255290ef833\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.381417 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32b7a898-5d57-496a-8ad1-380b636e3629-operator-scripts\") pod \"32b7a898-5d57-496a-8ad1-380b636e3629\" (UID: \"32b7a898-5d57-496a-8ad1-380b636e3629\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.381496 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwkrx\" (UniqueName: \"kubernetes.io/projected/32b7a898-5d57-496a-8ad1-380b636e3629-kube-api-access-cwkrx\") pod \"32b7a898-5d57-496a-8ad1-380b636e3629\" (UID: \"32b7a898-5d57-496a-8ad1-380b636e3629\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.381527 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flr2t\" (UniqueName: \"kubernetes.io/projected/85c8faae-95fb-4533-b45c-51e91bb95947-kube-api-access-flr2t\") pod \"85c8faae-95fb-4533-b45c-51e91bb95947\" (UID: \"85c8faae-95fb-4533-b45c-51e91bb95947\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.381576 4907 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7844ef4e-92dd-4ea6-a792-b255290ef833-operator-scripts\") pod \"7844ef4e-92dd-4ea6-a792-b255290ef833\" (UID: \"7844ef4e-92dd-4ea6-a792-b255290ef833\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.381618 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85c8faae-95fb-4533-b45c-51e91bb95947-operator-scripts\") pod \"85c8faae-95fb-4533-b45c-51e91bb95947\" (UID: \"85c8faae-95fb-4533-b45c-51e91bb95947\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.381681 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4hpn\" (UniqueName: \"kubernetes.io/projected/701eaff9-db27-4bff-975c-b8ebf034725f-kube-api-access-m4hpn\") pod \"701eaff9-db27-4bff-975c-b8ebf034725f\" (UID: \"701eaff9-db27-4bff-975c-b8ebf034725f\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.395854 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7844ef4e-92dd-4ea6-a792-b255290ef833-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7844ef4e-92dd-4ea6-a792-b255290ef833" (UID: "7844ef4e-92dd-4ea6-a792-b255290ef833"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.395969 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32b7a898-5d57-496a-8ad1-380b636e3629-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "32b7a898-5d57-496a-8ad1-380b636e3629" (UID: "32b7a898-5d57-496a-8ad1-380b636e3629"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.396465 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85c8faae-95fb-4533-b45c-51e91bb95947-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85c8faae-95fb-4533-b45c-51e91bb95947" (UID: "85c8faae-95fb-4533-b45c-51e91bb95947"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.397687 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/701eaff9-db27-4bff-975c-b8ebf034725f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "701eaff9-db27-4bff-975c-b8ebf034725f" (UID: "701eaff9-db27-4bff-975c-b8ebf034725f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.406051 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qz6th" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.465263 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/701eaff9-db27-4bff-975c-b8ebf034725f-kube-api-access-m4hpn" (OuterVolumeSpecName: "kube-api-access-m4hpn") pod "701eaff9-db27-4bff-975c-b8ebf034725f" (UID: "701eaff9-db27-4bff-975c-b8ebf034725f"). InnerVolumeSpecName "kube-api-access-m4hpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.466387 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32b7a898-5d57-496a-8ad1-380b636e3629-kube-api-access-cwkrx" (OuterVolumeSpecName: "kube-api-access-cwkrx") pod "32b7a898-5d57-496a-8ad1-380b636e3629" (UID: "32b7a898-5d57-496a-8ad1-380b636e3629"). 
InnerVolumeSpecName "kube-api-access-cwkrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.478648 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85c8faae-95fb-4533-b45c-51e91bb95947-kube-api-access-flr2t" (OuterVolumeSpecName: "kube-api-access-flr2t") pod "85c8faae-95fb-4533-b45c-51e91bb95947" (UID: "85c8faae-95fb-4533-b45c-51e91bb95947"). InnerVolumeSpecName "kube-api-access-flr2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.484670 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3998964-67eb-4adb-912d-a6367ae3beaf-operator-scripts\") pod \"c3998964-67eb-4adb-912d-a6367ae3beaf\" (UID: \"c3998964-67eb-4adb-912d-a6367ae3beaf\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.484796 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stqxw\" (UniqueName: \"kubernetes.io/projected/c3998964-67eb-4adb-912d-a6367ae3beaf-kube-api-access-stqxw\") pod \"c3998964-67eb-4adb-912d-a6367ae3beaf\" (UID: \"c3998964-67eb-4adb-912d-a6367ae3beaf\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.484858 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28rdf\" (UniqueName: \"kubernetes.io/projected/ac5cca69-8afc-417f-9f22-93c279730bf7-kube-api-access-28rdf\") pod \"ac5cca69-8afc-417f-9f22-93c279730bf7\" (UID: \"ac5cca69-8afc-417f-9f22-93c279730bf7\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.484925 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac5cca69-8afc-417f-9f22-93c279730bf7-operator-scripts\") pod \"ac5cca69-8afc-417f-9f22-93c279730bf7\" (UID: 
\"ac5cca69-8afc-417f-9f22-93c279730bf7\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.485091 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3998964-67eb-4adb-912d-a6367ae3beaf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3998964-67eb-4adb-912d-a6367ae3beaf" (UID: "c3998964-67eb-4adb-912d-a6367ae3beaf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.486755 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac5cca69-8afc-417f-9f22-93c279730bf7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac5cca69-8afc-417f-9f22-93c279730bf7" (UID: "ac5cca69-8afc-417f-9f22-93c279730bf7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.488334 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac5cca69-8afc-417f-9f22-93c279730bf7-kube-api-access-28rdf" (OuterVolumeSpecName: "kube-api-access-28rdf") pod "ac5cca69-8afc-417f-9f22-93c279730bf7" (UID: "ac5cca69-8afc-417f-9f22-93c279730bf7"). InnerVolumeSpecName "kube-api-access-28rdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.488549 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3998964-67eb-4adb-912d-a6367ae3beaf-kube-api-access-stqxw" (OuterVolumeSpecName: "kube-api-access-stqxw") pod "c3998964-67eb-4adb-912d-a6367ae3beaf" (UID: "c3998964-67eb-4adb-912d-a6367ae3beaf"). InnerVolumeSpecName "kube-api-access-stqxw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.489541 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28rdf\" (UniqueName: \"kubernetes.io/projected/ac5cca69-8afc-417f-9f22-93c279730bf7-kube-api-access-28rdf\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.489580 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac5cca69-8afc-417f-9f22-93c279730bf7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.489592 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwkrx\" (UniqueName: \"kubernetes.io/projected/32b7a898-5d57-496a-8ad1-380b636e3629-kube-api-access-cwkrx\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.489603 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flr2t\" (UniqueName: \"kubernetes.io/projected/85c8faae-95fb-4533-b45c-51e91bb95947-kube-api-access-flr2t\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.489613 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7844ef4e-92dd-4ea6-a792-b255290ef833-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.489623 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85c8faae-95fb-4533-b45c-51e91bb95947-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.489635 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4hpn\" (UniqueName: \"kubernetes.io/projected/701eaff9-db27-4bff-975c-b8ebf034725f-kube-api-access-m4hpn\") on node \"crc\" DevicePath \"\"" 
Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.489646 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/701eaff9-db27-4bff-975c-b8ebf034725f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.489656 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3998964-67eb-4adb-912d-a6367ae3beaf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.489665 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stqxw\" (UniqueName: \"kubernetes.io/projected/c3998964-67eb-4adb-912d-a6367ae3beaf-kube-api-access-stqxw\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.489700 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32b7a898-5d57-496a-8ad1-380b636e3629-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.494714 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4f95-account-create-update-s69m5" event={"ID":"c3998964-67eb-4adb-912d-a6367ae3beaf","Type":"ContainerDied","Data":"1db6cbf55cb0c25603a83b72a7e59442a34fdae25117fdae494efe5194fbe9f9"} Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.494819 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1db6cbf55cb0c25603a83b72a7e59442a34fdae25117fdae494efe5194fbe9f9" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.494930 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-4f95-account-create-update-s69m5" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.498397 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7844ef4e-92dd-4ea6-a792-b255290ef833-kube-api-access-6w54l" (OuterVolumeSpecName: "kube-api-access-6w54l") pod "7844ef4e-92dd-4ea6-a792-b255290ef833" (UID: "7844ef4e-92dd-4ea6-a792-b255290ef833"). InnerVolumeSpecName "kube-api-access-6w54l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.500611 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4cxkf" event={"ID":"32b7a898-5d57-496a-8ad1-380b636e3629","Type":"ContainerDied","Data":"7ebdc1e4356eb53f8529b577c97b97d9933e20814623f532b4bb4b915edac61b"} Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.500713 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ebdc1e4356eb53f8529b577c97b97d9933e20814623f532b4bb4b915edac61b" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.500813 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4cxkf" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.514413 4907 generic.go:334] "Generic (PLEG): container finished" podID="c9228204-5d32-47ea-9236-8ae3e4d5eebc" containerID="f67b424af53cfdd5ce878a8022111e895a8cf9cfb3040f9293498a59c3639643" exitCode=0 Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.514521 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c9228204-5d32-47ea-9236-8ae3e4d5eebc","Type":"ContainerDied","Data":"f67b424af53cfdd5ce878a8022111e895a8cf9cfb3040f9293498a59c3639643"} Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.528351 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fxqjb" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.528807 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fxqjb" event={"ID":"421865e2-2878-4bc4-9480-7afb5e7133fd","Type":"ContainerDied","Data":"27b355db7d5cedc8181a02fb361426451c203f9d2ad81fdff895eeef37f20409"} Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.528888 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27b355db7d5cedc8181a02fb361426451c203f9d2ad81fdff895eeef37f20409" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.545970 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fa35-account-create-update-nlm4d" event={"ID":"701eaff9-db27-4bff-975c-b8ebf034725f","Type":"ContainerDied","Data":"85da0aa90c973bde462e574429252127836f411550c3f80df423e234745c0a3a"} Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.546007 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85da0aa90c973bde462e574429252127836f411550c3f80df423e234745c0a3a" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.546087 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fa35-account-create-update-nlm4d" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.547322 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-f55d-account-create-update-gfk7k" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.548760 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lpvwr" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.554592 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8259-account-create-update-b45js" event={"ID":"7844ef4e-92dd-4ea6-a792-b255290ef833","Type":"ContainerDied","Data":"c4eb5493aee2cfc49b3a7bf9c6fd1fb99524c4b2093bd9c451b7385b9a2478b9"} Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.555185 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4eb5493aee2cfc49b3a7bf9c6fd1fb99524c4b2093bd9c451b7385b9a2478b9" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.554876 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8259-account-create-update-b45js" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.565117 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-jsvqc" event={"ID":"85c8faae-95fb-4533-b45c-51e91bb95947","Type":"ContainerDied","Data":"89855652aa489b1f329775a7995f090e17a9598a85b1756c65afd75b61a49760"} Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.565189 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89855652aa489b1f329775a7995f090e17a9598a85b1756c65afd75b61a49760" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.565346 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-jsvqc" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.596307 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6w54l\" (UniqueName: \"kubernetes.io/projected/7844ef4e-92dd-4ea6-a792-b255290ef833-kube-api-access-6w54l\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.600970 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lpvwr" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.601072 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lpvwr" event={"ID":"b54f9573-0bd6-4133-872a-b9e73129d654","Type":"ContainerDied","Data":"0560c14db158d64bce6c26ea29bec4d9fb4fa1f39a3d1f43370dee25cb6a56dc"} Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.601095 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0560c14db158d64bce6c26ea29bec4d9fb4fa1f39a3d1f43370dee25cb6a56dc" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.611963 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-f55d-account-create-update-gfk7k" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.612455 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-f55d-account-create-update-gfk7k" event={"ID":"d206e054-cdc8-4a59-9de8-93bfeae80700","Type":"ContainerDied","Data":"1b7ee34ddedd08cc57a130c57125c95e25259fc35a2121708ce9c43860c03f2d"} Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.612491 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b7ee34ddedd08cc57a130c57125c95e25259fc35a2121708ce9c43860c03f2d" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.632123 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qz6th" event={"ID":"ac5cca69-8afc-417f-9f22-93c279730bf7","Type":"ContainerDied","Data":"af1bc617ee948a447cd82ae616ac3ed3b9176830827dbc589579b48c36bb85c3"} Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.632166 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af1bc617ee948a447cd82ae616ac3ed3b9176830827dbc589579b48c36bb85c3" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.632256 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-qz6th" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.700655 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d206e054-cdc8-4a59-9de8-93bfeae80700-operator-scripts\") pod \"d206e054-cdc8-4a59-9de8-93bfeae80700\" (UID: \"d206e054-cdc8-4a59-9de8-93bfeae80700\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.700708 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b54f9573-0bd6-4133-872a-b9e73129d654-operator-scripts\") pod \"b54f9573-0bd6-4133-872a-b9e73129d654\" (UID: \"b54f9573-0bd6-4133-872a-b9e73129d654\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.700733 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwfbt\" (UniqueName: \"kubernetes.io/projected/421865e2-2878-4bc4-9480-7afb5e7133fd-kube-api-access-pwfbt\") pod \"421865e2-2878-4bc4-9480-7afb5e7133fd\" (UID: \"421865e2-2878-4bc4-9480-7afb5e7133fd\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.700792 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/421865e2-2878-4bc4-9480-7afb5e7133fd-operator-scripts\") pod \"421865e2-2878-4bc4-9480-7afb5e7133fd\" (UID: \"421865e2-2878-4bc4-9480-7afb5e7133fd\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.700846 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmgn2\" (UniqueName: \"kubernetes.io/projected/d206e054-cdc8-4a59-9de8-93bfeae80700-kube-api-access-wmgn2\") pod \"d206e054-cdc8-4a59-9de8-93bfeae80700\" (UID: \"d206e054-cdc8-4a59-9de8-93bfeae80700\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.700867 4907 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-w5599\" (UniqueName: \"kubernetes.io/projected/b54f9573-0bd6-4133-872a-b9e73129d654-kube-api-access-w5599\") pod \"b54f9573-0bd6-4133-872a-b9e73129d654\" (UID: \"b54f9573-0bd6-4133-872a-b9e73129d654\") " Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.702184 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/421865e2-2878-4bc4-9480-7afb5e7133fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "421865e2-2878-4bc4-9480-7afb5e7133fd" (UID: "421865e2-2878-4bc4-9480-7afb5e7133fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.704008 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-jjm2k" podStartSLOduration=2.671313185 podStartE2EDuration="8.703990677s" podCreationTimestamp="2026-01-27 18:27:30 +0000 UTC" firstStartedPulling="2026-01-27 18:27:32.106604039 +0000 UTC m=+1307.235886651" lastFinishedPulling="2026-01-27 18:27:38.139281531 +0000 UTC m=+1313.268564143" observedRunningTime="2026-01-27 18:27:38.695479993 +0000 UTC m=+1313.824762605" watchObservedRunningTime="2026-01-27 18:27:38.703990677 +0000 UTC m=+1313.833273289" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.704105 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d206e054-cdc8-4a59-9de8-93bfeae80700-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d206e054-cdc8-4a59-9de8-93bfeae80700" (UID: "d206e054-cdc8-4a59-9de8-93bfeae80700"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.704728 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/421865e2-2878-4bc4-9480-7afb5e7133fd-kube-api-access-pwfbt" (OuterVolumeSpecName: "kube-api-access-pwfbt") pod "421865e2-2878-4bc4-9480-7afb5e7133fd" (UID: "421865e2-2878-4bc4-9480-7afb5e7133fd"). InnerVolumeSpecName "kube-api-access-pwfbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.705390 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d206e054-cdc8-4a59-9de8-93bfeae80700-kube-api-access-wmgn2" (OuterVolumeSpecName: "kube-api-access-wmgn2") pod "d206e054-cdc8-4a59-9de8-93bfeae80700" (UID: "d206e054-cdc8-4a59-9de8-93bfeae80700"). InnerVolumeSpecName "kube-api-access-wmgn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.707975 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b54f9573-0bd6-4133-872a-b9e73129d654-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b54f9573-0bd6-4133-872a-b9e73129d654" (UID: "b54f9573-0bd6-4133-872a-b9e73129d654"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.713870 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b54f9573-0bd6-4133-872a-b9e73129d654-kube-api-access-w5599" (OuterVolumeSpecName: "kube-api-access-w5599") pod "b54f9573-0bd6-4133-872a-b9e73129d654" (UID: "b54f9573-0bd6-4133-872a-b9e73129d654"). InnerVolumeSpecName "kube-api-access-w5599". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.802873 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmgn2\" (UniqueName: \"kubernetes.io/projected/d206e054-cdc8-4a59-9de8-93bfeae80700-kube-api-access-wmgn2\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.802911 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5599\" (UniqueName: \"kubernetes.io/projected/b54f9573-0bd6-4133-872a-b9e73129d654-kube-api-access-w5599\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.802922 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d206e054-cdc8-4a59-9de8-93bfeae80700-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.802971 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b54f9573-0bd6-4133-872a-b9e73129d654-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.802983 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwfbt\" (UniqueName: \"kubernetes.io/projected/421865e2-2878-4bc4-9480-7afb5e7133fd-kube-api-access-pwfbt\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:38 crc kubenswrapper[4907]: I0127 18:27:38.802992 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/421865e2-2878-4bc4-9480-7afb5e7133fd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:39 crc kubenswrapper[4907]: I0127 18:27:39.644924 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jjm2k" 
event={"ID":"2dd8fea0-24a6-4212-875a-5cf95105f549","Type":"ContainerStarted","Data":"fe0fec6016bed853e22ca7a88bd8e6b3e7fd78881c47ba5750950f4d5911aee9"} Jan 27 18:27:39 crc kubenswrapper[4907]: I0127 18:27:39.649899 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fxqjb" Jan 27 18:27:39 crc kubenswrapper[4907]: I0127 18:27:39.650222 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c9228204-5d32-47ea-9236-8ae3e4d5eebc","Type":"ContainerStarted","Data":"a57186f91849ce1a4ed109cfc013b416cbe04dcb095c0b6377b4eab7974c4186"} Jan 27 18:27:41 crc kubenswrapper[4907]: I0127 18:27:41.674605 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-d856z" event={"ID":"1e2cf5dd-be65-4237-b77e-9bcc84cd26de","Type":"ContainerStarted","Data":"8fc0bac54c69cf6fe462be2636919fc30b1e5a1988f7c83b7d0f943527b1e3fc"} Jan 27 18:27:41 crc kubenswrapper[4907]: I0127 18:27:41.714341 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-d856z" podStartSLOduration=5.506933106 podStartE2EDuration="38.714320936s" podCreationTimestamp="2026-01-27 18:27:03 +0000 UTC" firstStartedPulling="2026-01-27 18:27:06.972584681 +0000 UTC m=+1282.101867293" lastFinishedPulling="2026-01-27 18:27:40.179972511 +0000 UTC m=+1315.309255123" observedRunningTime="2026-01-27 18:27:41.705216165 +0000 UTC m=+1316.834498777" watchObservedRunningTime="2026-01-27 18:27:41.714320936 +0000 UTC m=+1316.843603548" Jan 27 18:27:42 crc kubenswrapper[4907]: I0127 18:27:42.687180 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c9228204-5d32-47ea-9236-8ae3e4d5eebc","Type":"ContainerStarted","Data":"1dea172c6d28f2691392445aa77bfbba4308ce3ace2c0e3b5d9e46722da81c85"} Jan 27 18:27:42 crc kubenswrapper[4907]: I0127 18:27:42.687621 4907 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c9228204-5d32-47ea-9236-8ae3e4d5eebc","Type":"ContainerStarted","Data":"21b343aff3cb89a3b9711953b78bf2cea639d421de01b2b882ae43fd82fcf8bc"} Jan 27 18:27:42 crc kubenswrapper[4907]: I0127 18:27:42.691233 4907 generic.go:334] "Generic (PLEG): container finished" podID="2dd8fea0-24a6-4212-875a-5cf95105f549" containerID="fe0fec6016bed853e22ca7a88bd8e6b3e7fd78881c47ba5750950f4d5911aee9" exitCode=0 Jan 27 18:27:42 crc kubenswrapper[4907]: I0127 18:27:42.691282 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jjm2k" event={"ID":"2dd8fea0-24a6-4212-875a-5cf95105f549","Type":"ContainerDied","Data":"fe0fec6016bed853e22ca7a88bd8e6b3e7fd78881c47ba5750950f4d5911aee9"} Jan 27 18:27:42 crc kubenswrapper[4907]: I0127 18:27:42.736934 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:42 crc kubenswrapper[4907]: I0127 18:27:42.736990 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:42 crc kubenswrapper[4907]: I0127 18:27:42.743302 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=15.743274892 podStartE2EDuration="15.743274892s" podCreationTimestamp="2026-01-27 18:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:42.730520837 +0000 UTC m=+1317.859803499" watchObservedRunningTime="2026-01-27 18:27:42.743274892 +0000 UTC m=+1317.872557514" Jan 27 18:27:42 crc kubenswrapper[4907]: I0127 18:27:42.746733 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:43 crc kubenswrapper[4907]: I0127 18:27:43.704951 4907 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.235433 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.362831 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-config-data\") pod \"2dd8fea0-24a6-4212-875a-5cf95105f549\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.362885 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwn5v\" (UniqueName: \"kubernetes.io/projected/2dd8fea0-24a6-4212-875a-5cf95105f549-kube-api-access-qwn5v\") pod \"2dd8fea0-24a6-4212-875a-5cf95105f549\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.362975 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-combined-ca-bundle\") pod \"2dd8fea0-24a6-4212-875a-5cf95105f549\" (UID: \"2dd8fea0-24a6-4212-875a-5cf95105f549\") " Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.382830 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dd8fea0-24a6-4212-875a-5cf95105f549-kube-api-access-qwn5v" (OuterVolumeSpecName: "kube-api-access-qwn5v") pod "2dd8fea0-24a6-4212-875a-5cf95105f549" (UID: "2dd8fea0-24a6-4212-875a-5cf95105f549"). InnerVolumeSpecName "kube-api-access-qwn5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.465265 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwn5v\" (UniqueName: \"kubernetes.io/projected/2dd8fea0-24a6-4212-875a-5cf95105f549-kube-api-access-qwn5v\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.477856 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-config-data" (OuterVolumeSpecName: "config-data") pod "2dd8fea0-24a6-4212-875a-5cf95105f549" (UID: "2dd8fea0-24a6-4212-875a-5cf95105f549"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.482886 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2dd8fea0-24a6-4212-875a-5cf95105f549" (UID: "2dd8fea0-24a6-4212-875a-5cf95105f549"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.566692 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.566716 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd8fea0-24a6-4212-875a-5cf95105f549-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.712157 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-jjm2k" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.712144 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jjm2k" event={"ID":"2dd8fea0-24a6-4212-875a-5cf95105f549","Type":"ContainerDied","Data":"e14480432b2735c1df3bf974cf8b36e1846853ba2905c702511a67ffaa4ead95"} Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.712283 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e14480432b2735c1df3bf974cf8b36e1846853ba2905c702511a67ffaa4ead95" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.938975 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-msln7"] Jan 27 18:27:44 crc kubenswrapper[4907]: E0127 18:27:44.945616 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5cca69-8afc-417f-9f22-93c279730bf7" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.945645 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac5cca69-8afc-417f-9f22-93c279730bf7" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: E0127 18:27:44.945663 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b7a898-5d57-496a-8ad1-380b636e3629" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.945670 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b7a898-5d57-496a-8ad1-380b636e3629" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: E0127 18:27:44.945688 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d206e054-cdc8-4a59-9de8-93bfeae80700" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.945694 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="d206e054-cdc8-4a59-9de8-93bfeae80700" containerName="mariadb-account-create-update" 
Jan 27 18:27:44 crc kubenswrapper[4907]: E0127 18:27:44.945705 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd8fea0-24a6-4212-875a-5cf95105f549" containerName="keystone-db-sync" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.945710 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd8fea0-24a6-4212-875a-5cf95105f549" containerName="keystone-db-sync" Jan 27 18:27:44 crc kubenswrapper[4907]: E0127 18:27:44.945718 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="701eaff9-db27-4bff-975c-b8ebf034725f" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.945725 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="701eaff9-db27-4bff-975c-b8ebf034725f" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: E0127 18:27:44.945738 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85c8faae-95fb-4533-b45c-51e91bb95947" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.945744 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="85c8faae-95fb-4533-b45c-51e91bb95947" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: E0127 18:27:44.945760 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b54f9573-0bd6-4133-872a-b9e73129d654" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.945765 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="b54f9573-0bd6-4133-872a-b9e73129d654" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: E0127 18:27:44.945777 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3998964-67eb-4adb-912d-a6367ae3beaf" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.945783 4907 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c3998964-67eb-4adb-912d-a6367ae3beaf" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: E0127 18:27:44.945791 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7844ef4e-92dd-4ea6-a792-b255290ef833" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.945796 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7844ef4e-92dd-4ea6-a792-b255290ef833" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: E0127 18:27:44.945811 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="421865e2-2878-4bc4-9480-7afb5e7133fd" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.945816 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="421865e2-2878-4bc4-9480-7afb5e7133fd" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.946077 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac5cca69-8afc-417f-9f22-93c279730bf7" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.946088 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="701eaff9-db27-4bff-975c-b8ebf034725f" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.946095 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd8fea0-24a6-4212-875a-5cf95105f549" containerName="keystone-db-sync" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.946106 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3998964-67eb-4adb-912d-a6367ae3beaf" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.946116 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="7844ef4e-92dd-4ea6-a792-b255290ef833" 
containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.946129 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="b54f9573-0bd6-4133-872a-b9e73129d654" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.946137 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="d206e054-cdc8-4a59-9de8-93bfeae80700" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.946145 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="85c8faae-95fb-4533-b45c-51e91bb95947" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.946344 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="32b7a898-5d57-496a-8ad1-380b636e3629" containerName="mariadb-database-create" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.946353 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="421865e2-2878-4bc4-9480-7afb5e7133fd" containerName="mariadb-account-create-update" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.947600 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.971457 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-msln7"] Jan 27 18:27:44 crc kubenswrapper[4907]: I0127 18:27:44.999528 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nsfdn"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.001318 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.006276 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.006629 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.006754 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cspnd" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.007009 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.007109 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.026340 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nsfdn"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.080458 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlc92\" (UniqueName: \"kubernetes.io/projected/2e9b8697-5a15-4a0d-aab8-702699520f6a-kube-api-access-dlc92\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.080526 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.080565 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.080677 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zh5g\" (UniqueName: \"kubernetes.io/projected/fa089541-ea04-43f9-b816-fcde07a28a99-kube-api-access-8zh5g\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.080713 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.080746 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.080767 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-fernet-keys\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 
18:27:45.080895 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-scripts\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.080939 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-config-data\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.080997 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-config\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.081119 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-combined-ca-bundle\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.081217 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-credential-keys\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.132899 4907 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-rl9vb"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.134172 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-rl9vb" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.143300 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-865rb" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.144027 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.146012 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-rl9vb"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.182944 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlc92\" (UniqueName: \"kubernetes.io/projected/2e9b8697-5a15-4a0d-aab8-702699520f6a-kube-api-access-dlc92\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.183270 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.183316 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.183395 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zh5g\" (UniqueName: \"kubernetes.io/projected/fa089541-ea04-43f9-b816-fcde07a28a99-kube-api-access-8zh5g\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.183449 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.183506 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.183533 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-fernet-keys\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.183580 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-scripts\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.183608 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-config-data\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.183648 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-config\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.183719 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-combined-ca-bundle\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.183776 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-credential-keys\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.184169 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.194180 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-swift-storage-0\") pod 
\"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.195127 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.197175 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.197661 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-combined-ca-bundle\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.197674 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-config\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.209375 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-config-data\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 
crc kubenswrapper[4907]: I0127 18:27:45.209653 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-scripts\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.211106 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-fernet-keys\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.212710 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-credential-keys\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.221201 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zh5g\" (UniqueName: \"kubernetes.io/projected/fa089541-ea04-43f9-b816-fcde07a28a99-kube-api-access-8zh5g\") pod \"dnsmasq-dns-6f8c45789f-msln7\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.254307 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-kbngs"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.255608 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.255891 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlc92\" (UniqueName: \"kubernetes.io/projected/2e9b8697-5a15-4a0d-aab8-702699520f6a-kube-api-access-dlc92\") pod \"keystone-bootstrap-nsfdn\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.257519 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.257755 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.257973 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-vbpkv" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.270355 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.282683 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-8p796"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.284610 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-8p796" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.287757 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-dnq7b" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.287999 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.288175 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.294455 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-config-data\") pod \"heat-db-sync-rl9vb\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " pod="openstack/heat-db-sync-rl9vb" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.294520 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk57z\" (UniqueName: \"kubernetes.io/projected/90ffb508-65d2-4c20-95db-209a1c9a3399-kube-api-access-hk57z\") pod \"heat-db-sync-rl9vb\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " pod="openstack/heat-db-sync-rl9vb" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.294679 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-combined-ca-bundle\") pod \"heat-db-sync-rl9vb\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " pod="openstack/heat-db-sync-rl9vb" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.296094 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kbngs"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.326536 4907 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/neutron-db-sync-8p796"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.337930 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.367339 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-6gppf"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.368799 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6gppf" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.373877 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4w9sx" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.374111 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.374480 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.389335 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-x9tl4"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.394832 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-x9tl4" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400123 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-etc-machine-id\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400231 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r4jl\" (UniqueName: \"kubernetes.io/projected/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-kube-api-access-2r4jl\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400263 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-config-data\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400335 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-db-sync-config-data\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400465 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-combined-ca-bundle\") pod \"cinder-db-sync-kbngs\" (UID: 
\"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400498 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-combined-ca-bundle\") pod \"heat-db-sync-rl9vb\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " pod="openstack/heat-db-sync-rl9vb" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400596 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv426\" (UniqueName: \"kubernetes.io/projected/9e539a06-3352-4163-a259-6fd53182fe02-kube-api-access-jv426\") pod \"neutron-db-sync-8p796\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") " pod="openstack/neutron-db-sync-8p796" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400633 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-config-data\") pod \"heat-db-sync-rl9vb\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " pod="openstack/heat-db-sync-rl9vb" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400658 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-combined-ca-bundle\") pod \"neutron-db-sync-8p796\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") " pod="openstack/neutron-db-sync-8p796" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400710 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-scripts\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" 
Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400748 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk57z\" (UniqueName: \"kubernetes.io/projected/90ffb508-65d2-4c20-95db-209a1c9a3399-kube-api-access-hk57z\") pod \"heat-db-sync-rl9vb\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " pod="openstack/heat-db-sync-rl9vb" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.400813 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-config\") pod \"neutron-db-sync-8p796\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") " pod="openstack/neutron-db-sync-8p796" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.401519 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.401713 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-sh25w" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.413862 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-combined-ca-bundle\") pod \"heat-db-sync-rl9vb\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " pod="openstack/heat-db-sync-rl9vb" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.419615 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-6gppf"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.422243 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-config-data\") pod \"heat-db-sync-rl9vb\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " pod="openstack/heat-db-sync-rl9vb" Jan 27 18:27:45 crc 
kubenswrapper[4907]: I0127 18:27:45.441464 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk57z\" (UniqueName: \"kubernetes.io/projected/90ffb508-65d2-4c20-95db-209a1c9a3399-kube-api-access-hk57z\") pod \"heat-db-sync-rl9vb\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " pod="openstack/heat-db-sync-rl9vb" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.446326 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-msln7"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.501347 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-rl9vb" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.516688 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-db-sync-config-data\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.516864 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-combined-ca-bundle\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.516976 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-db-sync-config-data\") pod \"barbican-db-sync-x9tl4\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " pod="openstack/barbican-db-sync-x9tl4" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517068 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jv426\" (UniqueName: \"kubernetes.io/projected/9e539a06-3352-4163-a259-6fd53182fe02-kube-api-access-jv426\") pod \"neutron-db-sync-8p796\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") " pod="openstack/neutron-db-sync-8p796" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517125 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-combined-ca-bundle\") pod \"neutron-db-sync-8p796\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") " pod="openstack/neutron-db-sync-8p796" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517159 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmb6l\" (UniqueName: \"kubernetes.io/projected/3d3838ba-a929-4aab-a58d-dd4f39628f00-kube-api-access-pmb6l\") pod \"barbican-db-sync-x9tl4\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " pod="openstack/barbican-db-sync-x9tl4" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517200 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-combined-ca-bundle\") pod \"barbican-db-sync-x9tl4\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " pod="openstack/barbican-db-sync-x9tl4" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517254 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-scripts\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517308 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/de23a4c9-a62e-4523-8480-b19f3f10f586-logs\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517388 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-config\") pod \"neutron-db-sync-8p796\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") " pod="openstack/neutron-db-sync-8p796" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517419 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-combined-ca-bundle\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517480 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-etc-machine-id\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517525 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-config-data\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517589 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch8dm\" (UniqueName: 
\"kubernetes.io/projected/de23a4c9-a62e-4523-8480-b19f3f10f586-kube-api-access-ch8dm\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517635 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r4jl\" (UniqueName: \"kubernetes.io/projected/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-kube-api-access-2r4jl\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517665 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-config-data\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.517710 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-scripts\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.522533 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-etc-machine-id\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.537130 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-scripts\") pod \"cinder-db-sync-kbngs\" (UID: 
\"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.540149 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-db-sync-config-data\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.547217 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-x9tl4"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.552973 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-config-data\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.554131 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-combined-ca-bundle\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.554572 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv426\" (UniqueName: \"kubernetes.io/projected/9e539a06-3352-4163-a259-6fd53182fe02-kube-api-access-jv426\") pod \"neutron-db-sync-8p796\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") " pod="openstack/neutron-db-sync-8p796" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.557128 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-config\") pod \"neutron-db-sync-8p796\" (UID: 
\"9e539a06-3352-4163-a259-6fd53182fe02\") " pod="openstack/neutron-db-sync-8p796" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.563175 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-combined-ca-bundle\") pod \"neutron-db-sync-8p796\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") " pod="openstack/neutron-db-sync-8p796" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.565443 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r4jl\" (UniqueName: \"kubernetes.io/projected/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-kube-api-access-2r4jl\") pod \"cinder-db-sync-kbngs\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.586226 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-55flw"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.592632 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.604108 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-55flw"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.619465 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-db-sync-config-data\") pod \"barbican-db-sync-x9tl4\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " pod="openstack/barbican-db-sync-x9tl4" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.619549 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmb6l\" (UniqueName: \"kubernetes.io/projected/3d3838ba-a929-4aab-a58d-dd4f39628f00-kube-api-access-pmb6l\") pod \"barbican-db-sync-x9tl4\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " pod="openstack/barbican-db-sync-x9tl4" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.619586 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-combined-ca-bundle\") pod \"barbican-db-sync-x9tl4\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " pod="openstack/barbican-db-sync-x9tl4" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.619618 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de23a4c9-a62e-4523-8480-b19f3f10f586-logs\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.619655 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-combined-ca-bundle\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.619691 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-config-data\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.619715 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch8dm\" (UniqueName: \"kubernetes.io/projected/de23a4c9-a62e-4523-8480-b19f3f10f586-kube-api-access-ch8dm\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.619746 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-scripts\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.620630 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de23a4c9-a62e-4523-8480-b19f3f10f586-logs\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.624828 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-db-sync-config-data\") pod \"barbican-db-sync-x9tl4\" (UID: 
\"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " pod="openstack/barbican-db-sync-x9tl4" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.627087 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-combined-ca-bundle\") pod \"barbican-db-sync-x9tl4\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " pod="openstack/barbican-db-sync-x9tl4" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.627858 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-config-data\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.627960 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-combined-ca-bundle\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.628225 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-scripts\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.659593 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmb6l\" (UniqueName: \"kubernetes.io/projected/3d3838ba-a929-4aab-a58d-dd4f39628f00-kube-api-access-pmb6l\") pod \"barbican-db-sync-x9tl4\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " pod="openstack/barbican-db-sync-x9tl4" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.675399 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch8dm\" (UniqueName: \"kubernetes.io/projected/de23a4c9-a62e-4523-8480-b19f3f10f586-kube-api-access-ch8dm\") pod \"placement-db-sync-6gppf\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " pod="openstack/placement-db-sync-6gppf" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.708842 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-vbpkv" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.716011 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kbngs" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.723268 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.723568 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.723691 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.723795 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-config\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.723902 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6vvj\" (UniqueName: \"kubernetes.io/projected/89f27191-1460-4103-a832-acf1b0a8eca1-kube-api-access-c6vvj\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.723977 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.726790 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.727473 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-dnq7b" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.731218 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.733734 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-8p796" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.742205 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.742433 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.743924 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.760906 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4w9sx" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.767643 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6gppf" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.808139 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-sh25w" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.814826 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-x9tl4" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.828751 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-config\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.828862 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6vvj\" (UniqueName: \"kubernetes.io/projected/89f27191-1460-4103-a832-acf1b0a8eca1-kube-api-access-c6vvj\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.828894 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.828975 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-config-data\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.829033 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmv27\" (UniqueName: \"kubernetes.io/projected/472bdc20-aa30-4204-b7ef-ef2604ebc83f-kube-api-access-zmv27\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0" Jan 27 18:27:45 crc 
kubenswrapper[4907]: I0127 18:27:45.829080 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.829124 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.829855 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.830080 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.830120 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-run-httpd\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.831056 4907 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-config\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.831217 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-log-httpd\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.831303 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.831328 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-scripts\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.831643 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.832162 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.832725 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.834953 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.864804 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6vvj\" (UniqueName: \"kubernetes.io/projected/89f27191-1460-4103-a832-acf1b0a8eca1-kube-api-access-c6vvj\") pod \"dnsmasq-dns-fcfdd6f9f-55flw\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.932927 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-config-data\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.933298 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmv27\" (UniqueName: \"kubernetes.io/projected/472bdc20-aa30-4204-b7ef-ef2604ebc83f-kube-api-access-zmv27\") pod \"ceilometer-0\" (UID: 
\"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.933350 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.933391 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.935151 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-run-httpd\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.936357 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.936733 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-log-httpd\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.936785 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-scripts\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.938314 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-run-httpd\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.938808 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-log-httpd\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.956542 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-config-data\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.961822 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.966730 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-scripts\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.968760 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0" Jan 27 18:27:45 crc kubenswrapper[4907]: I0127 18:27:45.976289 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmv27\" (UniqueName: \"kubernetes.io/projected/472bdc20-aa30-4204-b7ef-ef2604ebc83f-kube-api-access-zmv27\") pod \"ceilometer-0\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " pod="openstack/ceilometer-0" Jan 27 18:27:46 crc kubenswrapper[4907]: I0127 18:27:46.062669 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-msln7"] Jan 27 18:27:46 crc kubenswrapper[4907]: I0127 18:27:46.087870 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:27:46 crc kubenswrapper[4907]: I0127 18:27:46.282828 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nsfdn"] Jan 27 18:27:46 crc kubenswrapper[4907]: I0127 18:27:46.329025 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 18:27:46 crc kubenswrapper[4907]: I0127 18:27:46.762691 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nsfdn" event={"ID":"2e9b8697-5a15-4a0d-aab8-702699520f6a","Type":"ContainerStarted","Data":"9d667868c21ba4bb6a7e3bb1330d96e385bb46248bfd87a6db756b37f813ef59"} Jan 27 18:27:46 crc kubenswrapper[4907]: I0127 18:27:46.768963 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-msln7" event={"ID":"fa089541-ea04-43f9-b816-fcde07a28a99","Type":"ContainerStarted","Data":"ef82f5a51c0b5b446b1227793ccf32b81c18c3d762371e596ecaa582f532aa19"} Jan 27 18:27:46 crc kubenswrapper[4907]: I0127 18:27:46.782090 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-rl9vb"] Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.091658 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-6gppf"] Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.114568 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kbngs"] Jan 27 18:27:47 crc kubenswrapper[4907]: W0127 18:27:47.126038 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e539a06_3352_4163_a259_6fd53182fe02.slice/crio-41dd99a4f743c29467b85e2fdf13069b5ddff4cc50160f9d22ed62b297fc48e2 WatchSource:0}: Error finding container 41dd99a4f743c29467b85e2fdf13069b5ddff4cc50160f9d22ed62b297fc48e2: Status 404 returned error can't find the container with id 
41dd99a4f743c29467b85e2fdf13069b5ddff4cc50160f9d22ed62b297fc48e2 Jan 27 18:27:47 crc kubenswrapper[4907]: W0127 18:27:47.138172 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde23a4c9_a62e_4523_8480_b19f3f10f586.slice/crio-58ac30f73a6589f6bdd1c4744cea13d714a24931697d0f9ec2983590de91b92b WatchSource:0}: Error finding container 58ac30f73a6589f6bdd1c4744cea13d714a24931697d0f9ec2983590de91b92b: Status 404 returned error can't find the container with id 58ac30f73a6589f6bdd1c4744cea13d714a24931697d0f9ec2983590de91b92b Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.183032 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-x9tl4"] Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.191874 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-8p796"] Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.203287 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-55flw"] Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.299969 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.795152 4907 generic.go:334] "Generic (PLEG): container finished" podID="fa089541-ea04-43f9-b816-fcde07a28a99" containerID="386525da4db0bca8e45a25c6984551110444cfdeefd94e7fedc20ede8867f3b3" exitCode=0 Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.795446 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-msln7" event={"ID":"fa089541-ea04-43f9-b816-fcde07a28a99","Type":"ContainerDied","Data":"386525da4db0bca8e45a25c6984551110444cfdeefd94e7fedc20ede8867f3b3"} Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.807034 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8p796" 
event={"ID":"9e539a06-3352-4163-a259-6fd53182fe02","Type":"ContainerStarted","Data":"7c80b303772f301c239f6686efd8654edcc36c31a198990442336d23f2216d7c"} Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.807072 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8p796" event={"ID":"9e539a06-3352-4163-a259-6fd53182fe02","Type":"ContainerStarted","Data":"41dd99a4f743c29467b85e2fdf13069b5ddff4cc50160f9d22ed62b297fc48e2"} Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.826839 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6gppf" event={"ID":"de23a4c9-a62e-4523-8480-b19f3f10f586","Type":"ContainerStarted","Data":"58ac30f73a6589f6bdd1c4744cea13d714a24931697d0f9ec2983590de91b92b"} Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.829111 4907 generic.go:334] "Generic (PLEG): container finished" podID="89f27191-1460-4103-a832-acf1b0a8eca1" containerID="9372dc1a167de07a6718a73cef4ed28d22b27bf8d1e903c2b77f36fdfb200ef7" exitCode=0 Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.829169 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" event={"ID":"89f27191-1460-4103-a832-acf1b0a8eca1","Type":"ContainerDied","Data":"9372dc1a167de07a6718a73cef4ed28d22b27bf8d1e903c2b77f36fdfb200ef7"} Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.829197 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" event={"ID":"89f27191-1460-4103-a832-acf1b0a8eca1","Type":"ContainerStarted","Data":"9f9319897192a3481febde0075f2336b2935d057d9a13f6811945fb00c33176e"} Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.836916 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rl9vb" event={"ID":"90ffb508-65d2-4c20-95db-209a1c9a3399","Type":"ContainerStarted","Data":"fc4f516e379ca6129c715c0b9a600b0f5b1d171146eae6a53e9f72e6f97ae48c"} Jan 27 18:27:47 crc kubenswrapper[4907]: 
I0127 18:27:47.843224 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x9tl4" event={"ID":"3d3838ba-a929-4aab-a58d-dd4f39628f00","Type":"ContainerStarted","Data":"840873a3d56f4dc36cd43eafa6b62d032d44d3c5edf94de7d25cbfc122cc2c74"} Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.863769 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-8p796" podStartSLOduration=2.863747241 podStartE2EDuration="2.863747241s" podCreationTimestamp="2026-01-27 18:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:47.842838202 +0000 UTC m=+1322.972120824" watchObservedRunningTime="2026-01-27 18:27:47.863747241 +0000 UTC m=+1322.993029853" Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.881119 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nsfdn" event={"ID":"2e9b8697-5a15-4a0d-aab8-702699520f6a","Type":"ContainerStarted","Data":"9c8b4c0110be5f64f9312aa5e05b1c554859d60683e6ece65a511961809093cd"} Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.897793 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"472bdc20-aa30-4204-b7ef-ef2604ebc83f","Type":"ContainerStarted","Data":"f21eb03c891c1ca372e748b6131d8a4413d1eb5b66cad03fa9fdb685e87a089a"} Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.922142 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kbngs" event={"ID":"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c","Type":"ContainerStarted","Data":"f268f2f373629b4712746f03d9c3a8027b21b68587212bf16359f8e777653bf7"} Jan 27 18:27:47 crc kubenswrapper[4907]: I0127 18:27:47.994874 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nsfdn" podStartSLOduration=3.994856737 podStartE2EDuration="3.994856737s" 
podCreationTimestamp="2026-01-27 18:27:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:47.944033881 +0000 UTC m=+1323.073316493" watchObservedRunningTime="2026-01-27 18:27:47.994856737 +0000 UTC m=+1323.124139349" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.197074 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.378929 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.533340 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-sb\") pod \"fa089541-ea04-43f9-b816-fcde07a28a99\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.533705 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-config\") pod \"fa089541-ea04-43f9-b816-fcde07a28a99\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.533797 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-swift-storage-0\") pod \"fa089541-ea04-43f9-b816-fcde07a28a99\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.533894 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-nb\") pod 
\"fa089541-ea04-43f9-b816-fcde07a28a99\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.533948 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zh5g\" (UniqueName: \"kubernetes.io/projected/fa089541-ea04-43f9-b816-fcde07a28a99-kube-api-access-8zh5g\") pod \"fa089541-ea04-43f9-b816-fcde07a28a99\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.534001 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-svc\") pod \"fa089541-ea04-43f9-b816-fcde07a28a99\" (UID: \"fa089541-ea04-43f9-b816-fcde07a28a99\") " Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.558381 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa089541-ea04-43f9-b816-fcde07a28a99-kube-api-access-8zh5g" (OuterVolumeSpecName: "kube-api-access-8zh5g") pod "fa089541-ea04-43f9-b816-fcde07a28a99" (UID: "fa089541-ea04-43f9-b816-fcde07a28a99"). InnerVolumeSpecName "kube-api-access-8zh5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.599033 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fa089541-ea04-43f9-b816-fcde07a28a99" (UID: "fa089541-ea04-43f9-b816-fcde07a28a99"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.627339 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fa089541-ea04-43f9-b816-fcde07a28a99" (UID: "fa089541-ea04-43f9-b816-fcde07a28a99"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.641231 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.641259 4907 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.641270 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zh5g\" (UniqueName: \"kubernetes.io/projected/fa089541-ea04-43f9-b816-fcde07a28a99-kube-api-access-8zh5g\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.643648 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fa089541-ea04-43f9-b816-fcde07a28a99" (UID: "fa089541-ea04-43f9-b816-fcde07a28a99"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.649806 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fa089541-ea04-43f9-b816-fcde07a28a99" (UID: "fa089541-ea04-43f9-b816-fcde07a28a99"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.693365 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-config" (OuterVolumeSpecName: "config") pod "fa089541-ea04-43f9-b816-fcde07a28a99" (UID: "fa089541-ea04-43f9-b816-fcde07a28a99"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.743302 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.743341 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.743350 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa089541-ea04-43f9-b816-fcde07a28a99-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.966099 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" 
event={"ID":"89f27191-1460-4103-a832-acf1b0a8eca1","Type":"ContainerStarted","Data":"f012675ca7dcdeb7509f93233e613fefdb3a4a00c3cef5c3d455bfc70e55795a"} Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.966188 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.987947 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" podStartSLOduration=3.9879313659999998 podStartE2EDuration="3.987931366s" podCreationTimestamp="2026-01-27 18:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:27:48.980994098 +0000 UTC m=+1324.110276720" watchObservedRunningTime="2026-01-27 18:27:48.987931366 +0000 UTC m=+1324.117213978" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.989835 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-msln7" Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.990177 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-msln7" event={"ID":"fa089541-ea04-43f9-b816-fcde07a28a99","Type":"ContainerDied","Data":"ef82f5a51c0b5b446b1227793ccf32b81c18c3d762371e596ecaa582f532aa19"} Jan 27 18:27:48 crc kubenswrapper[4907]: I0127 18:27:48.990214 4907 scope.go:117] "RemoveContainer" containerID="386525da4db0bca8e45a25c6984551110444cfdeefd94e7fedc20ede8867f3b3" Jan 27 18:27:49 crc kubenswrapper[4907]: I0127 18:27:49.119067 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-msln7"] Jan 27 18:27:49 crc kubenswrapper[4907]: I0127 18:27:49.164289 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-msln7"] Jan 27 18:27:49 crc kubenswrapper[4907]: I0127 18:27:49.765258 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa089541-ea04-43f9-b816-fcde07a28a99" path="/var/lib/kubelet/pods/fa089541-ea04-43f9-b816-fcde07a28a99/volumes" Jan 27 18:27:53 crc kubenswrapper[4907]: I0127 18:27:53.060600 4907 generic.go:334] "Generic (PLEG): container finished" podID="2e9b8697-5a15-4a0d-aab8-702699520f6a" containerID="9c8b4c0110be5f64f9312aa5e05b1c554859d60683e6ece65a511961809093cd" exitCode=0 Jan 27 18:27:53 crc kubenswrapper[4907]: I0127 18:27:53.060677 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nsfdn" event={"ID":"2e9b8697-5a15-4a0d-aab8-702699520f6a","Type":"ContainerDied","Data":"9c8b4c0110be5f64f9312aa5e05b1c554859d60683e6ece65a511961809093cd"} Jan 27 18:27:53 crc kubenswrapper[4907]: I0127 18:27:53.064368 4907 generic.go:334] "Generic (PLEG): container finished" podID="1e2cf5dd-be65-4237-b77e-9bcc84cd26de" containerID="8fc0bac54c69cf6fe462be2636919fc30b1e5a1988f7c83b7d0f943527b1e3fc" exitCode=0 Jan 27 18:27:53 crc kubenswrapper[4907]: 
I0127 18:27:53.064414 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-d856z" event={"ID":"1e2cf5dd-be65-4237-b77e-9bcc84cd26de","Type":"ContainerDied","Data":"8fc0bac54c69cf6fe462be2636919fc30b1e5a1988f7c83b7d0f943527b1e3fc"} Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.553913 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.567774 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-d856z" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.719981 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-config-data\") pod \"2e9b8697-5a15-4a0d-aab8-702699520f6a\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.720035 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-combined-ca-bundle\") pod \"2e9b8697-5a15-4a0d-aab8-702699520f6a\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.720122 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-scripts\") pod \"2e9b8697-5a15-4a0d-aab8-702699520f6a\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.720199 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlc92\" (UniqueName: \"kubernetes.io/projected/2e9b8697-5a15-4a0d-aab8-702699520f6a-kube-api-access-dlc92\") pod \"2e9b8697-5a15-4a0d-aab8-702699520f6a\" (UID: 
\"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.720224 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-combined-ca-bundle\") pod \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.720248 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-db-sync-config-data\") pod \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.720287 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-config-data\") pod \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.720308 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-credential-keys\") pod \"2e9b8697-5a15-4a0d-aab8-702699520f6a\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.720366 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d9vz\" (UniqueName: \"kubernetes.io/projected/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-kube-api-access-5d9vz\") pod \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\" (UID: \"1e2cf5dd-be65-4237-b77e-9bcc84cd26de\") " Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.720423 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-fernet-keys\") pod \"2e9b8697-5a15-4a0d-aab8-702699520f6a\" (UID: \"2e9b8697-5a15-4a0d-aab8-702699520f6a\") " Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.728284 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2e9b8697-5a15-4a0d-aab8-702699520f6a" (UID: "2e9b8697-5a15-4a0d-aab8-702699520f6a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.728498 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-kube-api-access-5d9vz" (OuterVolumeSpecName: "kube-api-access-5d9vz") pod "1e2cf5dd-be65-4237-b77e-9bcc84cd26de" (UID: "1e2cf5dd-be65-4237-b77e-9bcc84cd26de"). InnerVolumeSpecName "kube-api-access-5d9vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.728513 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-scripts" (OuterVolumeSpecName: "scripts") pod "2e9b8697-5a15-4a0d-aab8-702699520f6a" (UID: "2e9b8697-5a15-4a0d-aab8-702699520f6a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.729897 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1e2cf5dd-be65-4237-b77e-9bcc84cd26de" (UID: "1e2cf5dd-be65-4237-b77e-9bcc84cd26de"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.730623 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e9b8697-5a15-4a0d-aab8-702699520f6a-kube-api-access-dlc92" (OuterVolumeSpecName: "kube-api-access-dlc92") pod "2e9b8697-5a15-4a0d-aab8-702699520f6a" (UID: "2e9b8697-5a15-4a0d-aab8-702699520f6a"). InnerVolumeSpecName "kube-api-access-dlc92". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.736804 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2e9b8697-5a15-4a0d-aab8-702699520f6a" (UID: "2e9b8697-5a15-4a0d-aab8-702699520f6a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.757812 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e9b8697-5a15-4a0d-aab8-702699520f6a" (UID: "2e9b8697-5a15-4a0d-aab8-702699520f6a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.757919 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-config-data" (OuterVolumeSpecName: "config-data") pod "2e9b8697-5a15-4a0d-aab8-702699520f6a" (UID: "2e9b8697-5a15-4a0d-aab8-702699520f6a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.774266 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e2cf5dd-be65-4237-b77e-9bcc84cd26de" (UID: "1e2cf5dd-be65-4237-b77e-9bcc84cd26de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.791427 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-config-data" (OuterVolumeSpecName: "config-data") pod "1e2cf5dd-be65-4237-b77e-9bcc84cd26de" (UID: "1e2cf5dd-be65-4237-b77e-9bcc84cd26de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.822492 4907 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.822527 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.822540 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.822569 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:55 crc 
kubenswrapper[4907]: I0127 18:27:55.822578 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlc92\" (UniqueName: \"kubernetes.io/projected/2e9b8697-5a15-4a0d-aab8-702699520f6a-kube-api-access-dlc92\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.822588 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.822597 4907 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.822606 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.822616 4907 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e9b8697-5a15-4a0d-aab8-702699520f6a-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.822625 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d9vz\" (UniqueName: \"kubernetes.io/projected/1e2cf5dd-be65-4237-b77e-9bcc84cd26de-kube-api-access-5d9vz\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:55 crc kubenswrapper[4907]: I0127 18:27:55.938193 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.016487 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-4b5qw"] Jan 27 18:27:56 crc 
kubenswrapper[4907]: I0127 18:27:56.016739 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" podUID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerName="dnsmasq-dns" containerID="cri-o://3b4532a53aa1adfa4c375f69ce5746421a240b9dbb042e188f57fb06ae18aeb7" gracePeriod=10 Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.103675 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-d856z" event={"ID":"1e2cf5dd-be65-4237-b77e-9bcc84cd26de","Type":"ContainerDied","Data":"f1cc1f0ce5bf31e11bf06cb4a3cfff9c67b4b50d00c412bc683dc02e6e6d175b"} Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.103722 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1cc1f0ce5bf31e11bf06cb4a3cfff9c67b4b50d00c412bc683dc02e6e6d175b" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.103812 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-d856z" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.110063 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nsfdn" event={"ID":"2e9b8697-5a15-4a0d-aab8-702699520f6a","Type":"ContainerDied","Data":"9d667868c21ba4bb6a7e3bb1330d96e385bb46248bfd87a6db756b37f813ef59"} Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.110099 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d667868c21ba4bb6a7e3bb1330d96e385bb46248bfd87a6db756b37f813ef59" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.110104 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nsfdn" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.521950 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.521998 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.638084 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nsfdn"] Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.647323 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nsfdn"] Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.735334 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-px4wp"] Jan 27 18:27:56 crc kubenswrapper[4907]: E0127 18:27:56.735978 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa089541-ea04-43f9-b816-fcde07a28a99" containerName="init" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.735999 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa089541-ea04-43f9-b816-fcde07a28a99" containerName="init" Jan 27 18:27:56 crc kubenswrapper[4907]: E0127 18:27:56.736024 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e2cf5dd-be65-4237-b77e-9bcc84cd26de" containerName="glance-db-sync" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.736031 4907 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1e2cf5dd-be65-4237-b77e-9bcc84cd26de" containerName="glance-db-sync" Jan 27 18:27:56 crc kubenswrapper[4907]: E0127 18:27:56.736050 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9b8697-5a15-4a0d-aab8-702699520f6a" containerName="keystone-bootstrap" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.736056 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9b8697-5a15-4a0d-aab8-702699520f6a" containerName="keystone-bootstrap" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.736235 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e2cf5dd-be65-4237-b77e-9bcc84cd26de" containerName="glance-db-sync" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.736255 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa089541-ea04-43f9-b816-fcde07a28a99" containerName="init" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.736268 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e9b8697-5a15-4a0d-aab8-702699520f6a" containerName="keystone-bootstrap" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.738968 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.752077 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.752126 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cspnd" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.752227 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.752476 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.752650 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.762522 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-px4wp"] Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.848928 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-combined-ca-bundle\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.848983 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-config-data\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.849052 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-scripts\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.849131 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-fernet-keys\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.849216 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-credential-keys\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.849329 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phpwr\" (UniqueName: \"kubernetes.io/projected/b745a073-e4cf-471d-92ce-ac5da568b38e-kube-api-access-phpwr\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.955905 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-combined-ca-bundle\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.956263 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-config-data\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.956324 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-scripts\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.956387 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-fernet-keys\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.956446 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-credential-keys\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.956513 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phpwr\" (UniqueName: \"kubernetes.io/projected/b745a073-e4cf-471d-92ce-ac5da568b38e-kube-api-access-phpwr\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.972500 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-scripts\") pod \"keystone-bootstrap-px4wp\" (UID: 
\"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.973064 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-credential-keys\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.978262 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-fernet-keys\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.982665 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-combined-ca-bundle\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.987873 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-config-data\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:56 crc kubenswrapper[4907]: I0127 18:27:56.999844 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phpwr\" (UniqueName: \"kubernetes.io/projected/b745a073-e4cf-471d-92ce-ac5da568b38e-kube-api-access-phpwr\") pod \"keystone-bootstrap-px4wp\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 
18:27:57.090589 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.154447 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-pp6pz"] Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.156184 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.178840 4907 generic.go:334] "Generic (PLEG): container finished" podID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerID="3b4532a53aa1adfa4c375f69ce5746421a240b9dbb042e188f57fb06ae18aeb7" exitCode=0 Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.178882 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" event={"ID":"519051cc-696d-4d4b-9dc1-a23d7689e7fc","Type":"ContainerDied","Data":"3b4532a53aa1adfa4c375f69ce5746421a240b9dbb042e188f57fb06ae18aeb7"} Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.187566 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-pp6pz"] Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.274944 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.275048 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " 
pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.275099 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwhr5\" (UniqueName: \"kubernetes.io/projected/27a50802-6236-4a03-8ada-607126ed0127-kube-api-access-kwhr5\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.275167 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-config\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.275210 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.275315 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.377951 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: 
\"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.378041 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwhr5\" (UniqueName: \"kubernetes.io/projected/27a50802-6236-4a03-8ada-607126ed0127-kube-api-access-kwhr5\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.378138 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-config\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.378188 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.378307 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.378373 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " 
pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.379091 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.379238 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-config\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.380095 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.380796 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.381547 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.404308 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwhr5\" (UniqueName: \"kubernetes.io/projected/27a50802-6236-4a03-8ada-607126ed0127-kube-api-access-kwhr5\") pod \"dnsmasq-dns-57c957c4ff-pp6pz\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.503646 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.764602 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e9b8697-5a15-4a0d-aab8-702699520f6a" path="/var/lib/kubelet/pods/2e9b8697-5a15-4a0d-aab8-702699520f6a/volumes" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.982323 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.985046 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.988310 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.988628 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5gmz5" Jan 27 18:27:57 crc kubenswrapper[4907]: I0127 18:27:57.988778 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.033142 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.111239 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-logs\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.111612 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-config-data\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.111813 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqhjs\" (UniqueName: \"kubernetes.io/projected/993b3392-f1fa-4172-b516-5a22b9d636ad-kube-api-access-fqhjs\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc 
kubenswrapper[4907]: I0127 18:27:58.111949 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.112120 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-scripts\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.112232 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.112354 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.214829 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-logs\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " 
pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.215116 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-config-data\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.215245 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqhjs\" (UniqueName: \"kubernetes.io/projected/993b3392-f1fa-4172-b516-5a22b9d636ad-kube-api-access-fqhjs\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.215386 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.215255 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-logs\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.215654 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-scripts\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " 
pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.215751 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.215854 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.216372 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.220420 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-config-data\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.224673 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-scripts\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.242378 
4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.242388 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqhjs\" (UniqueName: \"kubernetes.io/projected/993b3392-f1fa-4172-b516-5a22b9d636ad-kube-api-access-fqhjs\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.244539 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.245300 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f246d510422cbc2bc7c65e4cfc4b09adee7977bbf094457002b4446ed6bccfbd/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.372323 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 
18:27:58.407743 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.467992 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.475587 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.488725 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.603743 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.636480 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.636544 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rccqt\" (UniqueName: \"kubernetes.io/projected/298d34df-8e81-4086-a6fa-d234e71167af-kube-api-access-rccqt\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.636601 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-logs\") pod \"glance-default-internal-api-0\" 
(UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.636641 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.636745 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-scripts\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.636800 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.636926 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-config-data\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.739519 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.739597 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.739823 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-scripts\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.739866 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.739895 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-config-data\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.740031 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: 
\"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.740074 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rccqt\" (UniqueName: \"kubernetes.io/projected/298d34df-8e81-4086-a6fa-d234e71167af-kube-api-access-rccqt\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.740098 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.740104 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-logs\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.744771 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.745329 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.745368 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ac46f52a85ef09145563fd0548ce08354897473e6fde7cb6037ea95dd6b9939/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.758411 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-scripts\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.759327 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-config-data\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.766611 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rccqt\" (UniqueName: \"kubernetes.io/projected/298d34df-8e81-4086-a6fa-d234e71167af-kube-api-access-rccqt\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.802661 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:27:58 crc kubenswrapper[4907]: I0127 18:27:58.893613 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:00 crc kubenswrapper[4907]: I0127 18:28:00.537292 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:28:00 crc kubenswrapper[4907]: I0127 18:28:00.626249 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:28:00 crc kubenswrapper[4907]: I0127 18:28:00.766898 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" podUID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: connect: connection refused" Jan 27 18:28:05 crc kubenswrapper[4907]: I0127 18:28:05.766987 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" podUID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: connect: connection refused" Jan 27 18:28:06 crc kubenswrapper[4907]: E0127 18:28:06.136491 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 27 18:28:06 crc kubenswrapper[4907]: E0127 18:28:06.136755 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ch8dm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-6gppf_openstack(de23a4c9-a62e-4523-8480-b19f3f10f586): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:28:06 crc kubenswrapper[4907]: E0127 18:28:06.137989 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-6gppf" podUID="de23a4c9-a62e-4523-8480-b19f3f10f586" Jan 27 18:28:06 crc kubenswrapper[4907]: E0127 18:28:06.280187 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-6gppf" podUID="de23a4c9-a62e-4523-8480-b19f3f10f586" Jan 27 18:28:09 crc kubenswrapper[4907]: E0127 18:28:09.345230 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 27 18:28:09 crc kubenswrapper[4907]: E0127 18:28:09.346259 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pmb6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-x9tl4_openstack(3d3838ba-a929-4aab-a58d-dd4f39628f00): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:28:09 crc kubenswrapper[4907]: E0127 18:28:09.348528 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-x9tl4" 
podUID="3d3838ba-a929-4aab-a58d-dd4f39628f00" Jan 27 18:28:10 crc kubenswrapper[4907]: E0127 18:28:10.327107 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-x9tl4" podUID="3d3838ba-a929-4aab-a58d-dd4f39628f00" Jan 27 18:28:14 crc kubenswrapper[4907]: I0127 18:28:14.368569 4907 generic.go:334] "Generic (PLEG): container finished" podID="9e539a06-3352-4163-a259-6fd53182fe02" containerID="7c80b303772f301c239f6686efd8654edcc36c31a198990442336d23f2216d7c" exitCode=0 Jan 27 18:28:14 crc kubenswrapper[4907]: I0127 18:28:14.368662 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8p796" event={"ID":"9e539a06-3352-4163-a259-6fd53182fe02","Type":"ContainerDied","Data":"7c80b303772f301c239f6686efd8654edcc36c31a198990442336d23f2216d7c"} Jan 27 18:28:15 crc kubenswrapper[4907]: I0127 18:28:15.767610 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" podUID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: i/o timeout" Jan 27 18:28:15 crc kubenswrapper[4907]: I0127 18:28:15.768160 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:28:18 crc kubenswrapper[4907]: E0127 18:28:18.014507 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Jan 27 18:28:18 crc kubenswrapper[4907]: E0127 18:28:18.015197 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hk57z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod heat-db-sync-rl9vb_openstack(90ffb508-65d2-4c20-95db-209a1c9a3399): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:28:18 crc kubenswrapper[4907]: E0127 18:28:18.016405 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-rl9vb" podUID="90ffb508-65d2-4c20-95db-209a1c9a3399" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.173689 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.197969 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8p796" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.314333 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-svc\") pod \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.314420 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-swift-storage-0\") pod \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.314446 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv426\" (UniqueName: \"kubernetes.io/projected/9e539a06-3352-4163-a259-6fd53182fe02-kube-api-access-jv426\") pod \"9e539a06-3352-4163-a259-6fd53182fe02\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") " Jan 27 18:28:18 crc 
kubenswrapper[4907]: I0127 18:28:18.314513 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-config\") pod \"9e539a06-3352-4163-a259-6fd53182fe02\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") " Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.314542 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-nb\") pod \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.314684 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-combined-ca-bundle\") pod \"9e539a06-3352-4163-a259-6fd53182fe02\" (UID: \"9e539a06-3352-4163-a259-6fd53182fe02\") " Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.314717 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-sb\") pod \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.314799 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-config\") pod \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.314845 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcnzm\" (UniqueName: \"kubernetes.io/projected/519051cc-696d-4d4b-9dc1-a23d7689e7fc-kube-api-access-pcnzm\") pod 
\"519051cc-696d-4d4b-9dc1-a23d7689e7fc\" (UID: \"519051cc-696d-4d4b-9dc1-a23d7689e7fc\") " Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.320589 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e539a06-3352-4163-a259-6fd53182fe02-kube-api-access-jv426" (OuterVolumeSpecName: "kube-api-access-jv426") pod "9e539a06-3352-4163-a259-6fd53182fe02" (UID: "9e539a06-3352-4163-a259-6fd53182fe02"). InnerVolumeSpecName "kube-api-access-jv426". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.322480 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/519051cc-696d-4d4b-9dc1-a23d7689e7fc-kube-api-access-pcnzm" (OuterVolumeSpecName: "kube-api-access-pcnzm") pod "519051cc-696d-4d4b-9dc1-a23d7689e7fc" (UID: "519051cc-696d-4d4b-9dc1-a23d7689e7fc"). InnerVolumeSpecName "kube-api-access-pcnzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.376182 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "519051cc-696d-4d4b-9dc1-a23d7689e7fc" (UID: "519051cc-696d-4d4b-9dc1-a23d7689e7fc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.376475 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-config" (OuterVolumeSpecName: "config") pod "9e539a06-3352-4163-a259-6fd53182fe02" (UID: "9e539a06-3352-4163-a259-6fd53182fe02"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.378579 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-config" (OuterVolumeSpecName: "config") pod "519051cc-696d-4d4b-9dc1-a23d7689e7fc" (UID: "519051cc-696d-4d4b-9dc1-a23d7689e7fc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.378881 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e539a06-3352-4163-a259-6fd53182fe02" (UID: "9e539a06-3352-4163-a259-6fd53182fe02"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.390485 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "519051cc-696d-4d4b-9dc1-a23d7689e7fc" (UID: "519051cc-696d-4d4b-9dc1-a23d7689e7fc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.390741 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "519051cc-696d-4d4b-9dc1-a23d7689e7fc" (UID: "519051cc-696d-4d4b-9dc1-a23d7689e7fc"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.391162 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "519051cc-696d-4d4b-9dc1-a23d7689e7fc" (UID: "519051cc-696d-4d4b-9dc1-a23d7689e7fc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.414490 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" event={"ID":"519051cc-696d-4d4b-9dc1-a23d7689e7fc","Type":"ContainerDied","Data":"781cf8fdf6e1c415cd3c6299b37cc29b7386bd8937f3de99345c272dc7aa5d4d"} Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.414580 4907 scope.go:117] "RemoveContainer" containerID="3b4532a53aa1adfa4c375f69ce5746421a240b9dbb042e188f57fb06ae18aeb7" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.414516 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.417325 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.417377 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.417392 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.417406 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcnzm\" (UniqueName: \"kubernetes.io/projected/519051cc-696d-4d4b-9dc1-a23d7689e7fc-kube-api-access-pcnzm\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.417417 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.417427 4907 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.417439 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv426\" (UniqueName: \"kubernetes.io/projected/9e539a06-3352-4163-a259-6fd53182fe02-kube-api-access-jv426\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:18 crc kubenswrapper[4907]: 
I0127 18:28:18.417450 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9e539a06-3352-4163-a259-6fd53182fe02-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.417460 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/519051cc-696d-4d4b-9dc1-a23d7689e7fc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.418730 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8p796" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.419274 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8p796" event={"ID":"9e539a06-3352-4163-a259-6fd53182fe02","Type":"ContainerDied","Data":"41dd99a4f743c29467b85e2fdf13069b5ddff4cc50160f9d22ed62b297fc48e2"} Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.419313 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41dd99a4f743c29467b85e2fdf13069b5ddff4cc50160f9d22ed62b297fc48e2" Jan 27 18:28:18 crc kubenswrapper[4907]: E0127 18:28:18.420259 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-rl9vb" podUID="90ffb508-65d2-4c20-95db-209a1c9a3399" Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.520097 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-4b5qw"] Jan 27 18:28:18 crc kubenswrapper[4907]: I0127 18:28:18.531642 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-4b5qw"] Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.550399 4907 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-pp6pz"] Jan 27 18:28:19 crc kubenswrapper[4907]: E0127 18:28:19.586859 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 27 18:28:19 crc kubenswrapper[4907]: E0127 18:28:19.587011 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-
bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2r4jl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-kbngs_openstack(fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:28:19 crc kubenswrapper[4907]: E0127 18:28:19.588318 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-kbngs" podUID="fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.609570 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-knq4v"] Jan 27 18:28:19 crc kubenswrapper[4907]: E0127 18:28:19.610710 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerName="dnsmasq-dns" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.610815 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerName="dnsmasq-dns" Jan 27 18:28:19 crc kubenswrapper[4907]: E0127 18:28:19.610892 4907 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerName="init" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.610947 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerName="init" Jan 27 18:28:19 crc kubenswrapper[4907]: E0127 18:28:19.611035 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e539a06-3352-4163-a259-6fd53182fe02" containerName="neutron-db-sync" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.611092 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e539a06-3352-4163-a259-6fd53182fe02" containerName="neutron-db-sync" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.611379 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e539a06-3352-4163-a259-6fd53182fe02" containerName="neutron-db-sync" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.611459 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerName="dnsmasq-dns" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.612763 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.638209 4907 scope.go:117] "RemoveContainer" containerID="6fc8c049e4d9b2beab0ebc8103626eb7b4d1724e79a941fa43f1500bfda70d3e" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.642471 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-knq4v"] Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.719443 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59cf67488d-dzx5l"] Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.723640 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.730847 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.731180 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.731361 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.732063 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59cf67488d-dzx5l"] Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.752611 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-ovndb-tls-certs\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.752646 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-config\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.752695 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-combined-ca-bundle\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.752720 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr2nz\" (UniqueName: \"kubernetes.io/projected/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-kube-api-access-cr2nz\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.752737 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.752763 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.752806 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-httpd-config\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.752828 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-config\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.752847 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clx4b\" (UniqueName: \"kubernetes.io/projected/90a7953e-f884-40eb-a25f-356aefbc6b83-kube-api-access-clx4b\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.752891 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.753032 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.766685 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-dnq7b" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.816133 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" path="/var/lib/kubelet/pods/519051cc-696d-4d4b-9dc1-a23d7689e7fc/volumes" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.855151 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr2nz\" (UniqueName: \"kubernetes.io/projected/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-kube-api-access-cr2nz\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 
18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.855469 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.855512 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.855591 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-httpd-config\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.855619 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-config\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.855641 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clx4b\" (UniqueName: \"kubernetes.io/projected/90a7953e-f884-40eb-a25f-356aefbc6b83-kube-api-access-clx4b\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.855699 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.855752 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.855802 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-ovndb-tls-certs\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.855825 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-config\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.855885 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-combined-ca-bundle\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.859312 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.859455 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-config\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.860327 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.861028 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.861413 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.871992 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-httpd-config\") pod 
\"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.872134 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-ovndb-tls-certs\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.872356 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-config\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.872618 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-combined-ca-bundle\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.879266 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr2nz\" (UniqueName: \"kubernetes.io/projected/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-kube-api-access-cr2nz\") pod \"dnsmasq-dns-5ccc5c4795-knq4v\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:19 crc kubenswrapper[4907]: I0127 18:28:19.891081 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clx4b\" (UniqueName: \"kubernetes.io/projected/90a7953e-f884-40eb-a25f-356aefbc6b83-kube-api-access-clx4b\") pod \"neutron-59cf67488d-dzx5l\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " 
pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:20 crc kubenswrapper[4907]: I0127 18:28:20.019671 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:20 crc kubenswrapper[4907]: I0127 18:28:20.101632 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:20 crc kubenswrapper[4907]: I0127 18:28:20.130925 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:28:20 crc kubenswrapper[4907]: I0127 18:28:20.447753 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-px4wp"] Jan 27 18:28:20 crc kubenswrapper[4907]: I0127 18:28:20.470457 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"472bdc20-aa30-4204-b7ef-ef2604ebc83f","Type":"ContainerStarted","Data":"696acc4cb63963503279e3a1b33ea3557463eec6057dd239e03b27593a53f0f7"} Jan 27 18:28:20 crc kubenswrapper[4907]: I0127 18:28:20.516010 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"298d34df-8e81-4086-a6fa-d234e71167af","Type":"ContainerStarted","Data":"8945b01efe6f0eefa5770c937a9e0da1d1ed380ebf9c56f44c4665a8df76080a"} Jan 27 18:28:20 crc kubenswrapper[4907]: E0127 18:28:20.613185 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-kbngs" podUID="fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" Jan 27 18:28:20 crc kubenswrapper[4907]: I0127 18:28:20.617241 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:28:20 crc kubenswrapper[4907]: I0127 18:28:20.653733 4907 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-pp6pz"] Jan 27 18:28:20 crc kubenswrapper[4907]: I0127 18:28:20.768499 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-4b5qw" podUID="519051cc-696d-4d4b-9dc1-a23d7689e7fc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: i/o timeout" Jan 27 18:28:20 crc kubenswrapper[4907]: I0127 18:28:20.976797 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-knq4v"] Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.169441 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59cf67488d-dzx5l"] Jan 27 18:28:21 crc kubenswrapper[4907]: W0127 18:28:21.194231 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90a7953e_f884_40eb_a25f_356aefbc6b83.slice/crio-6f4c874eb81621562484702f6ff39867afe712a969d135e16ff650053bcfbc4f WatchSource:0}: Error finding container 6f4c874eb81621562484702f6ff39867afe712a969d135e16ff650053bcfbc4f: Status 404 returned error can't find the container with id 6f4c874eb81621562484702f6ff39867afe712a969d135e16ff650053bcfbc4f Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.587254 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59cf67488d-dzx5l" event={"ID":"90a7953e-f884-40eb-a25f-356aefbc6b83","Type":"ContainerStarted","Data":"6f4c874eb81621562484702f6ff39867afe712a969d135e16ff650053bcfbc4f"} Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.588892 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-px4wp" event={"ID":"b745a073-e4cf-471d-92ce-ac5da568b38e","Type":"ContainerStarted","Data":"b27d9c0fcfb493cd20b36c4d0cecc4afcfa83f18386b3491676b63c3ccd64964"} Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.588979 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-px4wp" event={"ID":"b745a073-e4cf-471d-92ce-ac5da568b38e","Type":"ContainerStarted","Data":"8748bf57233d635da5d97a9b78e3315d936eb272eba3df8885742cd6d9b0a7fa"} Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.590720 4907 generic.go:334] "Generic (PLEG): container finished" podID="026a73e3-c86f-49dd-b04d-e8208c9ce9e2" containerID="28bbb72e623034afdbf128221b10f5cd93ad8bc3e76bd585f307ba4d60b2e87c" exitCode=0 Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.590820 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" event={"ID":"026a73e3-c86f-49dd-b04d-e8208c9ce9e2","Type":"ContainerDied","Data":"28bbb72e623034afdbf128221b10f5cd93ad8bc3e76bd585f307ba4d60b2e87c"} Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.590842 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" event={"ID":"026a73e3-c86f-49dd-b04d-e8208c9ce9e2","Type":"ContainerStarted","Data":"078eb3921a7645a6b0c598308484ce9d504153fe66f1bf90605b880a55507b5b"} Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.601504 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"298d34df-8e81-4086-a6fa-d234e71167af","Type":"ContainerStarted","Data":"42cbf1a77faa4227840e3904f343bae4f2bb8cbdf952f1375bc2ed60f6ee20df"} Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.606497 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"993b3392-f1fa-4172-b516-5a22b9d636ad","Type":"ContainerStarted","Data":"1212c15afb7a7361957c7ebacb8cda44faf9003a09cf0d539ec9f77c3a803903"} Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.616354 4907 generic.go:334] "Generic (PLEG): container finished" podID="27a50802-6236-4a03-8ada-607126ed0127" containerID="c0f732ea1b0c17a98bc0ab655121edc08e373000ca801d81fb095f6028f7c667" exitCode=0 Jan 27 18:28:21 crc 
kubenswrapper[4907]: I0127 18:28:21.616412 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" event={"ID":"27a50802-6236-4a03-8ada-607126ed0127","Type":"ContainerDied","Data":"c0f732ea1b0c17a98bc0ab655121edc08e373000ca801d81fb095f6028f7c667"} Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.616448 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" event={"ID":"27a50802-6236-4a03-8ada-607126ed0127","Type":"ContainerStarted","Data":"ffe96f6fb082d960e1eda4044ba984492c5310cb5c8d156c93612ff111fd1a09"} Jan 27 18:28:21 crc kubenswrapper[4907]: I0127 18:28:21.622704 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-px4wp" podStartSLOduration=25.622684387 podStartE2EDuration="25.622684387s" podCreationTimestamp="2026-01-27 18:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:21.614268896 +0000 UTC m=+1356.743551508" watchObservedRunningTime="2026-01-27 18:28:21.622684387 +0000 UTC m=+1356.751966999" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.438718 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.520806 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-sb\") pod \"27a50802-6236-4a03-8ada-607126ed0127\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.520867 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-nb\") pod \"27a50802-6236-4a03-8ada-607126ed0127\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.520905 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-swift-storage-0\") pod \"27a50802-6236-4a03-8ada-607126ed0127\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.520952 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-svc\") pod \"27a50802-6236-4a03-8ada-607126ed0127\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.521035 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwhr5\" (UniqueName: \"kubernetes.io/projected/27a50802-6236-4a03-8ada-607126ed0127-kube-api-access-kwhr5\") pod \"27a50802-6236-4a03-8ada-607126ed0127\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.521066 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-config\") pod \"27a50802-6236-4a03-8ada-607126ed0127\" (UID: \"27a50802-6236-4a03-8ada-607126ed0127\") " Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.555943 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a50802-6236-4a03-8ada-607126ed0127-kube-api-access-kwhr5" (OuterVolumeSpecName: "kube-api-access-kwhr5") pod "27a50802-6236-4a03-8ada-607126ed0127" (UID: "27a50802-6236-4a03-8ada-607126ed0127"). InnerVolumeSpecName "kube-api-access-kwhr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.586711 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-54487fdc5c-ktzbt"] Jan 27 18:28:22 crc kubenswrapper[4907]: E0127 18:28:22.587348 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a50802-6236-4a03-8ada-607126ed0127" containerName="init" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.587365 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a50802-6236-4a03-8ada-607126ed0127" containerName="init" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.593822 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a50802-6236-4a03-8ada-607126ed0127" containerName="init" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.595362 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.602115 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54487fdc5c-ktzbt"] Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.604618 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.605819 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.623199 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt9q7\" (UniqueName: \"kubernetes.io/projected/18fa0523-c08a-427c-b27e-77543fe4bd94-kube-api-access-zt9q7\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.623263 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-internal-tls-certs\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.623297 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-public-tls-certs\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.623329 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-config\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.623367 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-httpd-config\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.623392 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-ovndb-tls-certs\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.623418 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-combined-ca-bundle\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.623492 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwhr5\" (UniqueName: \"kubernetes.io/projected/27a50802-6236-4a03-8ada-607126ed0127-kube-api-access-kwhr5\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.661904 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"993b3392-f1fa-4172-b516-5a22b9d636ad","Type":"ContainerStarted","Data":"2a2f9078b6fe348e5bbc61fa3555c5e8d3734ebb02d49760cb05469c52dc7918"} Jan 27 
18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.669840 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.670485 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-pp6pz" event={"ID":"27a50802-6236-4a03-8ada-607126ed0127","Type":"ContainerDied","Data":"ffe96f6fb082d960e1eda4044ba984492c5310cb5c8d156c93612ff111fd1a09"} Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.670595 4907 scope.go:117] "RemoveContainer" containerID="c0f732ea1b0c17a98bc0ab655121edc08e373000ca801d81fb095f6028f7c667" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.673388 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59cf67488d-dzx5l" event={"ID":"90a7953e-f884-40eb-a25f-356aefbc6b83","Type":"ContainerStarted","Data":"2bf9c7f91e2206abf55c0751131ddb9b1941ed8de1738af1ef3034eebeb54df1"} Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.726066 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt9q7\" (UniqueName: \"kubernetes.io/projected/18fa0523-c08a-427c-b27e-77543fe4bd94-kube-api-access-zt9q7\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.726147 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-internal-tls-certs\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.726182 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-public-tls-certs\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.726217 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-config\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.726266 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-httpd-config\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.726311 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-ovndb-tls-certs\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.726337 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-combined-ca-bundle\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.734561 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-combined-ca-bundle\") pod \"neutron-54487fdc5c-ktzbt\" (UID: 
\"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.739041 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-public-tls-certs\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.769228 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "27a50802-6236-4a03-8ada-607126ed0127" (UID: "27a50802-6236-4a03-8ada-607126ed0127"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.772769 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "27a50802-6236-4a03-8ada-607126ed0127" (UID: "27a50802-6236-4a03-8ada-607126ed0127"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.780110 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "27a50802-6236-4a03-8ada-607126ed0127" (UID: "27a50802-6236-4a03-8ada-607126ed0127"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.784242 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-ovndb-tls-certs\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.784532 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-internal-tls-certs\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.785215 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-config\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.794243 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt9q7\" (UniqueName: \"kubernetes.io/projected/18fa0523-c08a-427c-b27e-77543fe4bd94-kube-api-access-zt9q7\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.799017 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-httpd-config\") pod \"neutron-54487fdc5c-ktzbt\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") " pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.826419 4907 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-config" (OuterVolumeSpecName: "config") pod "27a50802-6236-4a03-8ada-607126ed0127" (UID: "27a50802-6236-4a03-8ada-607126ed0127"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.827981 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.828010 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.828024 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.828036 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.838104 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "27a50802-6236-4a03-8ada-607126ed0127" (UID: "27a50802-6236-4a03-8ada-607126ed0127"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:22 crc kubenswrapper[4907]: I0127 18:28:22.931116 4907 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27a50802-6236-4a03-8ada-607126ed0127-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.054104 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.150353 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-pp6pz"] Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.159177 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-pp6pz"] Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.722457 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" event={"ID":"026a73e3-c86f-49dd-b04d-e8208c9ce9e2","Type":"ContainerStarted","Data":"86d08bea6d3c9bed7838ecc53f7ccd3c171b17cb0b7994ed9bfe6c1a1920772f"} Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.725058 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.735065 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"298d34df-8e81-4086-a6fa-d234e71167af","Type":"ContainerStarted","Data":"e5b457d4a54ab4750e1777f3083ad96955bb134b13d3d7627a38c619fbb2ce19"} Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.735274 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="298d34df-8e81-4086-a6fa-d234e71167af" containerName="glance-log" containerID="cri-o://42cbf1a77faa4227840e3904f343bae4f2bb8cbdf952f1375bc2ed60f6ee20df" 
gracePeriod=30 Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.735649 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="298d34df-8e81-4086-a6fa-d234e71167af" containerName="glance-httpd" containerID="cri-o://e5b457d4a54ab4750e1777f3083ad96955bb134b13d3d7627a38c619fbb2ce19" gracePeriod=30 Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.756187 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" podStartSLOduration=4.756168726 podStartE2EDuration="4.756168726s" podCreationTimestamp="2026-01-27 18:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:23.742800423 +0000 UTC m=+1358.872083055" watchObservedRunningTime="2026-01-27 18:28:23.756168726 +0000 UTC m=+1358.885451338" Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.783730 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=26.783707105 podStartE2EDuration="26.783707105s" podCreationTimestamp="2026-01-27 18:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:23.778045113 +0000 UTC m=+1358.907327725" watchObservedRunningTime="2026-01-27 18:28:23.783707105 +0000 UTC m=+1358.912989717" Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.787666 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27a50802-6236-4a03-8ada-607126ed0127" path="/var/lib/kubelet/pods/27a50802-6236-4a03-8ada-607126ed0127/volumes" Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.788224 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6gppf" 
event={"ID":"de23a4c9-a62e-4523-8480-b19f3f10f586","Type":"ContainerStarted","Data":"c6ec5c767366a96ce4d265d6ebdb584e1c40e865966b1cddfe60f049c2cfcbf9"} Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.788257 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.788269 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59cf67488d-dzx5l" event={"ID":"90a7953e-f884-40eb-a25f-356aefbc6b83","Type":"ContainerStarted","Data":"7cc697526f3fac2242634b918709487c8fe948a7cbed93c93fad4c98568461f3"} Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.796197 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"472bdc20-aa30-4204-b7ef-ef2604ebc83f","Type":"ContainerStarted","Data":"e31ac7857a0a7cf939ca1305a14e857b669105350be60af9d31c438d388f56e1"} Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.831871 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54487fdc5c-ktzbt"] Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.840209 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-59cf67488d-dzx5l" podStartSLOduration=4.840187203 podStartE2EDuration="4.840187203s" podCreationTimestamp="2026-01-27 18:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:23.812700106 +0000 UTC m=+1358.941982718" watchObservedRunningTime="2026-01-27 18:28:23.840187203 +0000 UTC m=+1358.969469815" Jan 27 18:28:23 crc kubenswrapper[4907]: I0127 18:28:23.889125 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-6gppf" podStartSLOduration=4.772589905 podStartE2EDuration="38.889104875s" podCreationTimestamp="2026-01-27 18:27:45 +0000 UTC" firstStartedPulling="2026-01-27 
18:27:47.196036873 +0000 UTC m=+1322.325319485" lastFinishedPulling="2026-01-27 18:28:21.312551843 +0000 UTC m=+1356.441834455" observedRunningTime="2026-01-27 18:28:23.875821944 +0000 UTC m=+1359.005104556" watchObservedRunningTime="2026-01-27 18:28:23.889104875 +0000 UTC m=+1359.018387487" Jan 27 18:28:24 crc kubenswrapper[4907]: I0127 18:28:24.812448 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"993b3392-f1fa-4172-b516-5a22b9d636ad","Type":"ContainerStarted","Data":"1b82ba66c61a00f4c3e8fa7143c7b3b8ebe19c013b5c50c429c39a9943129c7f"} Jan 27 18:28:24 crc kubenswrapper[4907]: I0127 18:28:24.812543 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="993b3392-f1fa-4172-b516-5a22b9d636ad" containerName="glance-log" containerID="cri-o://2a2f9078b6fe348e5bbc61fa3555c5e8d3734ebb02d49760cb05469c52dc7918" gracePeriod=30 Jan 27 18:28:24 crc kubenswrapper[4907]: I0127 18:28:24.812640 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="993b3392-f1fa-4172-b516-5a22b9d636ad" containerName="glance-httpd" containerID="cri-o://1b82ba66c61a00f4c3e8fa7143c7b3b8ebe19c013b5c50c429c39a9943129c7f" gracePeriod=30 Jan 27 18:28:24 crc kubenswrapper[4907]: I0127 18:28:24.817880 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54487fdc5c-ktzbt" event={"ID":"18fa0523-c08a-427c-b27e-77543fe4bd94","Type":"ContainerStarted","Data":"8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78"} Jan 27 18:28:24 crc kubenswrapper[4907]: I0127 18:28:24.817933 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54487fdc5c-ktzbt" event={"ID":"18fa0523-c08a-427c-b27e-77543fe4bd94","Type":"ContainerStarted","Data":"8568e05f8defdc45da4b5e782f2fecb663f59f14a17d880cf92f38ca7f0d7c34"} Jan 27 18:28:24 crc kubenswrapper[4907]: I0127 
18:28:24.824377 4907 generic.go:334] "Generic (PLEG): container finished" podID="298d34df-8e81-4086-a6fa-d234e71167af" containerID="e5b457d4a54ab4750e1777f3083ad96955bb134b13d3d7627a38c619fbb2ce19" exitCode=0 Jan 27 18:28:24 crc kubenswrapper[4907]: I0127 18:28:24.824414 4907 generic.go:334] "Generic (PLEG): container finished" podID="298d34df-8e81-4086-a6fa-d234e71167af" containerID="42cbf1a77faa4227840e3904f343bae4f2bb8cbdf952f1375bc2ed60f6ee20df" exitCode=143 Jan 27 18:28:24 crc kubenswrapper[4907]: I0127 18:28:24.824454 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"298d34df-8e81-4086-a6fa-d234e71167af","Type":"ContainerDied","Data":"e5b457d4a54ab4750e1777f3083ad96955bb134b13d3d7627a38c619fbb2ce19"} Jan 27 18:28:24 crc kubenswrapper[4907]: I0127 18:28:24.824499 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"298d34df-8e81-4086-a6fa-d234e71167af","Type":"ContainerDied","Data":"42cbf1a77faa4227840e3904f343bae4f2bb8cbdf952f1375bc2ed60f6ee20df"} Jan 27 18:28:24 crc kubenswrapper[4907]: I0127 18:28:24.839002 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=28.838984586 podStartE2EDuration="28.838984586s" podCreationTimestamp="2026-01-27 18:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:24.831681967 +0000 UTC m=+1359.960964579" watchObservedRunningTime="2026-01-27 18:28:24.838984586 +0000 UTC m=+1359.968267198" Jan 27 18:28:25 crc kubenswrapper[4907]: E0127 18:28:25.235058 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod993b3392_f1fa_4172_b516_5a22b9d636ad.slice/crio-1b82ba66c61a00f4c3e8fa7143c7b3b8ebe19c013b5c50c429c39a9943129c7f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod993b3392_f1fa_4172_b516_5a22b9d636ad.slice/crio-conmon-1b82ba66c61a00f4c3e8fa7143c7b3b8ebe19c013b5c50c429c39a9943129c7f.scope\": RecentStats: unable to find data in memory cache]" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.858957 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.859137 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x9tl4" event={"ID":"3d3838ba-a929-4aab-a58d-dd4f39628f00","Type":"ContainerStarted","Data":"89581dcdb8d4c922466b9ce122633bb71ff2a690ee6340da6db9f720efc193a2"} Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.877314 4907 generic.go:334] "Generic (PLEG): container finished" podID="993b3392-f1fa-4172-b516-5a22b9d636ad" containerID="1b82ba66c61a00f4c3e8fa7143c7b3b8ebe19c013b5c50c429c39a9943129c7f" exitCode=0 Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.877728 4907 generic.go:334] "Generic (PLEG): container finished" podID="993b3392-f1fa-4172-b516-5a22b9d636ad" containerID="2a2f9078b6fe348e5bbc61fa3555c5e8d3734ebb02d49760cb05469c52dc7918" exitCode=143 Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.877452 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"993b3392-f1fa-4172-b516-5a22b9d636ad","Type":"ContainerDied","Data":"1b82ba66c61a00f4c3e8fa7143c7b3b8ebe19c013b5c50c429c39a9943129c7f"} Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.877465 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-x9tl4" 
podStartSLOduration=2.811639729 podStartE2EDuration="40.877446945s" podCreationTimestamp="2026-01-27 18:27:45 +0000 UTC" firstStartedPulling="2026-01-27 18:27:47.197475434 +0000 UTC m=+1322.326758046" lastFinishedPulling="2026-01-27 18:28:25.26328265 +0000 UTC m=+1360.392565262" observedRunningTime="2026-01-27 18:28:25.873396359 +0000 UTC m=+1361.002679001" watchObservedRunningTime="2026-01-27 18:28:25.877446945 +0000 UTC m=+1361.006729557" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.877809 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"993b3392-f1fa-4172-b516-5a22b9d636ad","Type":"ContainerDied","Data":"2a2f9078b6fe348e5bbc61fa3555c5e8d3734ebb02d49760cb05469c52dc7918"} Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.889267 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54487fdc5c-ktzbt" event={"ID":"18fa0523-c08a-427c-b27e-77543fe4bd94","Type":"ContainerStarted","Data":"5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a"} Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.889423 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-54487fdc5c-ktzbt" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.893345 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.893739 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"298d34df-8e81-4086-a6fa-d234e71167af","Type":"ContainerDied","Data":"8945b01efe6f0eefa5770c937a9e0da1d1ed380ebf9c56f44c4665a8df76080a"} Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.893777 4907 scope.go:117] "RemoveContainer" containerID="e5b457d4a54ab4750e1777f3083ad96955bb134b13d3d7627a38c619fbb2ce19" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.908933 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-config-data\") pod \"298d34df-8e81-4086-a6fa-d234e71167af\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.909110 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-scripts\") pod \"298d34df-8e81-4086-a6fa-d234e71167af\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.909147 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-httpd-run\") pod \"298d34df-8e81-4086-a6fa-d234e71167af\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.909206 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rccqt\" (UniqueName: \"kubernetes.io/projected/298d34df-8e81-4086-a6fa-d234e71167af-kube-api-access-rccqt\") pod \"298d34df-8e81-4086-a6fa-d234e71167af\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " Jan 27 18:28:25 crc kubenswrapper[4907]: 
I0127 18:28:25.909291 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-logs\") pod \"298d34df-8e81-4086-a6fa-d234e71167af\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.909518 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"298d34df-8e81-4086-a6fa-d234e71167af\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.909775 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-combined-ca-bundle\") pod \"298d34df-8e81-4086-a6fa-d234e71167af\" (UID: \"298d34df-8e81-4086-a6fa-d234e71167af\") " Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.923742 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "298d34df-8e81-4086-a6fa-d234e71167af" (UID: "298d34df-8e81-4086-a6fa-d234e71167af"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.934270 4907 scope.go:117] "RemoveContainer" containerID="42cbf1a77faa4227840e3904f343bae4f2bb8cbdf952f1375bc2ed60f6ee20df" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.941308 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/298d34df-8e81-4086-a6fa-d234e71167af-kube-api-access-rccqt" (OuterVolumeSpecName: "kube-api-access-rccqt") pod "298d34df-8e81-4086-a6fa-d234e71167af" (UID: "298d34df-8e81-4086-a6fa-d234e71167af"). 
InnerVolumeSpecName "kube-api-access-rccqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.943603 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-logs" (OuterVolumeSpecName: "logs") pod "298d34df-8e81-4086-a6fa-d234e71167af" (UID: "298d34df-8e81-4086-a6fa-d234e71167af"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.957744 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-scripts" (OuterVolumeSpecName: "scripts") pod "298d34df-8e81-4086-a6fa-d234e71167af" (UID: "298d34df-8e81-4086-a6fa-d234e71167af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.960989 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-54487fdc5c-ktzbt" podStartSLOduration=3.960972638 podStartE2EDuration="3.960972638s" podCreationTimestamp="2026-01-27 18:28:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:25.947810921 +0000 UTC m=+1361.077093533" watchObservedRunningTime="2026-01-27 18:28:25.960972638 +0000 UTC m=+1361.090255250" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.983961 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8" (OuterVolumeSpecName: "glance") pod "298d34df-8e81-4086-a6fa-d234e71167af" (UID: "298d34df-8e81-4086-a6fa-d234e71167af"). InnerVolumeSpecName "pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:28:25 crc kubenswrapper[4907]: I0127 18:28:25.984817 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "298d34df-8e81-4086-a6fa-d234e71167af" (UID: "298d34df-8e81-4086-a6fa-d234e71167af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.009675 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-config-data" (OuterVolumeSpecName: "config-data") pod "298d34df-8e81-4086-a6fa-d234e71167af" (UID: "298d34df-8e81-4086-a6fa-d234e71167af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.012449 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rccqt\" (UniqueName: \"kubernetes.io/projected/298d34df-8e81-4086-a6fa-d234e71167af-kube-api-access-rccqt\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.012565 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.012638 4907 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") on node \"crc\" " Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.012657 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.012673 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.012687 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/298d34df-8e81-4086-a6fa-d234e71167af-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.012697 4907 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/298d34df-8e81-4086-a6fa-d234e71167af-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.062615 4907 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.062773 4907 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8") on node "crc" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.119310 4907 reconciler_common.go:293] "Volume detached for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.252229 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.286685 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.313603 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:28:26 crc kubenswrapper[4907]: E0127 18:28:26.315558 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="298d34df-8e81-4086-a6fa-d234e71167af" containerName="glance-log" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.315663 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="298d34df-8e81-4086-a6fa-d234e71167af" containerName="glance-log" Jan 27 18:28:26 crc kubenswrapper[4907]: E0127 18:28:26.315743 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="298d34df-8e81-4086-a6fa-d234e71167af" containerName="glance-httpd" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.315797 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="298d34df-8e81-4086-a6fa-d234e71167af" containerName="glance-httpd" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.316034 4907 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="298d34df-8e81-4086-a6fa-d234e71167af" containerName="glance-log" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.316124 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="298d34df-8e81-4086-a6fa-d234e71167af" containerName="glance-httpd" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.320267 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.330636 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.331107 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.344770 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.346823 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427203 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-logs\") pod \"993b3392-f1fa-4172-b516-5a22b9d636ad\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427369 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"993b3392-f1fa-4172-b516-5a22b9d636ad\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427449 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-config-data\") pod \"993b3392-f1fa-4172-b516-5a22b9d636ad\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427466 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-combined-ca-bundle\") pod \"993b3392-f1fa-4172-b516-5a22b9d636ad\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427630 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427666 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8p2s\" (UniqueName: \"kubernetes.io/projected/26ebee0c-64db-4384-9e27-95691ee28a17-kube-api-access-d8p2s\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427700 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427734 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-scripts\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427779 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427830 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427920 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-config-data\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.427950 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-logs\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.428933 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-logs" (OuterVolumeSpecName: "logs") pod "993b3392-f1fa-4172-b516-5a22b9d636ad" (UID: "993b3392-f1fa-4172-b516-5a22b9d636ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.459864 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "993b3392-f1fa-4172-b516-5a22b9d636ad" (UID: "993b3392-f1fa-4172-b516-5a22b9d636ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.478173 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1" (OuterVolumeSpecName: "glance") pod "993b3392-f1fa-4172-b516-5a22b9d636ad" (UID: "993b3392-f1fa-4172-b516-5a22b9d636ad"). InnerVolumeSpecName "pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.520664 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.520706 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.520743 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.521428 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"09255cfe56907a7b3b5ba34ba9dd0c7542d64f0e4b965b5da61b9cf87189cb31"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.521470 4907 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://09255cfe56907a7b3b5ba34ba9dd0c7542d64f0e4b965b5da61b9cf87189cb31" gracePeriod=600 Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.529319 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-httpd-run\") pod \"993b3392-f1fa-4172-b516-5a22b9d636ad\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.529359 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqhjs\" (UniqueName: \"kubernetes.io/projected/993b3392-f1fa-4172-b516-5a22b9d636ad-kube-api-access-fqhjs\") pod \"993b3392-f1fa-4172-b516-5a22b9d636ad\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.529524 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-scripts\") pod \"993b3392-f1fa-4172-b516-5a22b9d636ad\" (UID: \"993b3392-f1fa-4172-b516-5a22b9d636ad\") " Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.529862 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.529883 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8p2s\" (UniqueName: 
\"kubernetes.io/projected/26ebee0c-64db-4384-9e27-95691ee28a17-kube-api-access-d8p2s\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.529908 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.529922 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-scripts\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.529947 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.529967 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.530011 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-config-data\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.530030 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-logs\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.530081 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.530102 4907 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") on node \"crc\" " Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.530113 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.532908 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "993b3392-f1fa-4172-b516-5a22b9d636ad" (UID: "993b3392-f1fa-4172-b516-5a22b9d636ad"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.534240 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.535491 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-logs\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.545851 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/993b3392-f1fa-4172-b516-5a22b9d636ad-kube-api-access-fqhjs" (OuterVolumeSpecName: "kube-api-access-fqhjs") pod "993b3392-f1fa-4172-b516-5a22b9d636ad" (UID: "993b3392-f1fa-4172-b516-5a22b9d636ad"). InnerVolumeSpecName "kube-api-access-fqhjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.546311 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.546353 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ac46f52a85ef09145563fd0548ce08354897473e6fde7cb6037ea95dd6b9939/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.555765 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-scripts\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.555947 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-scripts" (OuterVolumeSpecName: "scripts") pod "993b3392-f1fa-4172-b516-5a22b9d636ad" (UID: "993b3392-f1fa-4172-b516-5a22b9d636ad"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.556266 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8p2s\" (UniqueName: \"kubernetes.io/projected/26ebee0c-64db-4384-9e27-95691ee28a17-kube-api-access-d8p2s\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.564013 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.569918 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.577232 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-config-data\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.608263 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-config-data" (OuterVolumeSpecName: "config-data") pod "993b3392-f1fa-4172-b516-5a22b9d636ad" (UID: "993b3392-f1fa-4172-b516-5a22b9d636ad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.630296 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.632181 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.632214 4907 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/993b3392-f1fa-4172-b516-5a22b9d636ad-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.632224 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqhjs\" (UniqueName: \"kubernetes.io/projected/993b3392-f1fa-4172-b516-5a22b9d636ad-kube-api-access-fqhjs\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.632234 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993b3392-f1fa-4172-b516-5a22b9d636ad-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.637138 4907 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.637432 4907 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1") on node "crc" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.671284 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.740045 4907 reconciler_common.go:293] "Volume detached for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.916269 4907 generic.go:334] "Generic (PLEG): container finished" podID="b745a073-e4cf-471d-92ce-ac5da568b38e" containerID="b27d9c0fcfb493cd20b36c4d0cecc4afcfa83f18386b3491676b63c3ccd64964" exitCode=0 Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.916635 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-px4wp" event={"ID":"b745a073-e4cf-471d-92ce-ac5da568b38e","Type":"ContainerDied","Data":"b27d9c0fcfb493cd20b36c4d0cecc4afcfa83f18386b3491676b63c3ccd64964"} Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.941851 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"993b3392-f1fa-4172-b516-5a22b9d636ad","Type":"ContainerDied","Data":"1212c15afb7a7361957c7ebacb8cda44faf9003a09cf0d539ec9f77c3a803903"} Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.941903 4907 scope.go:117] "RemoveContainer" containerID="1b82ba66c61a00f4c3e8fa7143c7b3b8ebe19c013b5c50c429c39a9943129c7f" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.942009 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.963153 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"09255cfe56907a7b3b5ba34ba9dd0c7542d64f0e4b965b5da61b9cf87189cb31"} Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.963223 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="09255cfe56907a7b3b5ba34ba9dd0c7542d64f0e4b965b5da61b9cf87189cb31" exitCode=0 Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.974006 4907 generic.go:334] "Generic (PLEG): container finished" podID="de23a4c9-a62e-4523-8480-b19f3f10f586" containerID="c6ec5c767366a96ce4d265d6ebdb584e1c40e865966b1cddfe60f049c2cfcbf9" exitCode=0 Jan 27 18:28:26 crc kubenswrapper[4907]: I0127 18:28:26.975108 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6gppf" event={"ID":"de23a4c9-a62e-4523-8480-b19f3f10f586","Type":"ContainerDied","Data":"c6ec5c767366a96ce4d265d6ebdb584e1c40e865966b1cddfe60f049c2cfcbf9"} Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.001433 4907 scope.go:117] "RemoveContainer" containerID="2a2f9078b6fe348e5bbc61fa3555c5e8d3734ebb02d49760cb05469c52dc7918" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.043378 4907 scope.go:117] "RemoveContainer" containerID="07545f0ac6e9596ef48552354e292c52ec4eabdd5bcbde6f20c6f81f90669809" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.047194 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.083633 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.115317 4907 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:28:27 crc kubenswrapper[4907]: E0127 18:28:27.115920 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="993b3392-f1fa-4172-b516-5a22b9d636ad" containerName="glance-httpd" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.115942 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="993b3392-f1fa-4172-b516-5a22b9d636ad" containerName="glance-httpd" Jan 27 18:28:27 crc kubenswrapper[4907]: E0127 18:28:27.115964 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="993b3392-f1fa-4172-b516-5a22b9d636ad" containerName="glance-log" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.115972 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="993b3392-f1fa-4172-b516-5a22b9d636ad" containerName="glance-log" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.116226 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="993b3392-f1fa-4172-b516-5a22b9d636ad" containerName="glance-log" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.116258 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="993b3392-f1fa-4172-b516-5a22b9d636ad" containerName="glance-httpd" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.118451 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.120898 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.122615 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.154617 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.270097 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-config-data\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.270218 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.270298 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdj7s\" (UniqueName: \"kubernetes.io/projected/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-kube-api-access-jdj7s\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.270313 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.270400 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.270969 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-logs\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.271127 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.271158 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-scripts\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: W0127 18:28:27.318337 4907 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26ebee0c_64db_4384_9e27_95691ee28a17.slice/crio-f34af67741fb75b00695d753383421cb9433dd7d9bdce1c92c63679b38072e13 WatchSource:0}: Error finding container f34af67741fb75b00695d753383421cb9433dd7d9bdce1c92c63679b38072e13: Status 404 returned error can't find the container with id f34af67741fb75b00695d753383421cb9433dd7d9bdce1c92c63679b38072e13 Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.327666 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.372950 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-config-data\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.373036 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.373094 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdj7s\" (UniqueName: \"kubernetes.io/projected/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-kube-api-access-jdj7s\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.373115 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.373205 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.373225 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-logs\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.373275 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.373292 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-scripts\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.374261 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.374373 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-logs\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.380156 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.380206 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f246d510422cbc2bc7c65e4cfc4b09adee7977bbf094457002b4446ed6bccfbd/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.383965 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-scripts\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.384155 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.389982 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.394639 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-config-data\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.394659 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdj7s\" (UniqueName: \"kubernetes.io/projected/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-kube-api-access-jdj7s\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.453970 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.753132 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.772029 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="298d34df-8e81-4086-a6fa-d234e71167af" path="/var/lib/kubelet/pods/298d34df-8e81-4086-a6fa-d234e71167af/volumes" Jan 27 18:28:27 crc kubenswrapper[4907]: I0127 18:28:27.773148 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="993b3392-f1fa-4172-b516-5a22b9d636ad" path="/var/lib/kubelet/pods/993b3392-f1fa-4172-b516-5a22b9d636ad/volumes" Jan 27 18:28:28 crc kubenswrapper[4907]: I0127 18:28:28.003544 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402"} Jan 27 18:28:28 crc kubenswrapper[4907]: I0127 18:28:28.016430 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26ebee0c-64db-4384-9e27-95691ee28a17","Type":"ContainerStarted","Data":"f34af67741fb75b00695d753383421cb9433dd7d9bdce1c92c63679b38072e13"} Jan 27 18:28:28 crc kubenswrapper[4907]: I0127 18:28:28.414088 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:28:29 crc kubenswrapper[4907]: I0127 18:28:29.052698 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26ebee0c-64db-4384-9e27-95691ee28a17","Type":"ContainerStarted","Data":"57c10f8dad61ce7e2df71ecf5231d40aae469c3d301f21aca43a58b66cc591b4"} Jan 27 18:28:30 crc kubenswrapper[4907]: I0127 18:28:30.020718 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:30 crc kubenswrapper[4907]: I0127 18:28:30.133099 4907 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-55flw"] Jan 27 18:28:30 crc kubenswrapper[4907]: I0127 18:28:30.133757 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" podUID="89f27191-1460-4103-a832-acf1b0a8eca1" containerName="dnsmasq-dns" containerID="cri-o://f012675ca7dcdeb7509f93233e613fefdb3a4a00c3cef5c3d455bfc70e55795a" gracePeriod=10 Jan 27 18:28:30 crc kubenswrapper[4907]: I0127 18:28:30.937830 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" podUID="89f27191-1460-4103-a832-acf1b0a8eca1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: connect: connection refused" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.080906 4907 generic.go:334] "Generic (PLEG): container finished" podID="3d3838ba-a929-4aab-a58d-dd4f39628f00" containerID="89581dcdb8d4c922466b9ce122633bb71ff2a690ee6340da6db9f720efc193a2" exitCode=0 Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.081019 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x9tl4" event={"ID":"3d3838ba-a929-4aab-a58d-dd4f39628f00","Type":"ContainerDied","Data":"89581dcdb8d4c922466b9ce122633bb71ff2a690ee6340da6db9f720efc193a2"} Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.083579 4907 generic.go:334] "Generic (PLEG): container finished" podID="89f27191-1460-4103-a832-acf1b0a8eca1" containerID="f012675ca7dcdeb7509f93233e613fefdb3a4a00c3cef5c3d455bfc70e55795a" exitCode=0 Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.083613 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" event={"ID":"89f27191-1460-4103-a832-acf1b0a8eca1","Type":"ContainerDied","Data":"f012675ca7dcdeb7509f93233e613fefdb3a4a00c3cef5c3d455bfc70e55795a"} Jan 27 18:28:31 crc kubenswrapper[4907]: W0127 18:28:31.437750 4907 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedc45c3a_8ebc_47ae_b823_b3013e4ea0df.slice/crio-a75310de648c3e38dbbf692b92cd3d98b2b70ebd876dc13e2fd06c3922f21dde WatchSource:0}: Error finding container a75310de648c3e38dbbf692b92cd3d98b2b70ebd876dc13e2fd06c3922f21dde: Status 404 returned error can't find the container with id a75310de648c3e38dbbf692b92cd3d98b2b70ebd876dc13e2fd06c3922f21dde Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.750197 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6gppf" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.762793 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.880483 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-scripts\") pod \"de23a4c9-a62e-4523-8480-b19f3f10f586\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.881044 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-config-data\") pod \"de23a4c9-a62e-4523-8480-b19f3f10f586\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.881098 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de23a4c9-a62e-4523-8480-b19f3f10f586-logs\") pod \"de23a4c9-a62e-4523-8480-b19f3f10f586\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.881166 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-credential-keys\") pod \"b745a073-e4cf-471d-92ce-ac5da568b38e\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.881209 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-scripts\") pod \"b745a073-e4cf-471d-92ce-ac5da568b38e\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.881264 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phpwr\" (UniqueName: \"kubernetes.io/projected/b745a073-e4cf-471d-92ce-ac5da568b38e-kube-api-access-phpwr\") pod \"b745a073-e4cf-471d-92ce-ac5da568b38e\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.881340 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-config-data\") pod \"b745a073-e4cf-471d-92ce-ac5da568b38e\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.881425 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-combined-ca-bundle\") pod \"de23a4c9-a62e-4523-8480-b19f3f10f586\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.881465 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch8dm\" (UniqueName: \"kubernetes.io/projected/de23a4c9-a62e-4523-8480-b19f3f10f586-kube-api-access-ch8dm\") pod \"de23a4c9-a62e-4523-8480-b19f3f10f586\" (UID: \"de23a4c9-a62e-4523-8480-b19f3f10f586\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 
18:28:31.881577 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-fernet-keys\") pod \"b745a073-e4cf-471d-92ce-ac5da568b38e\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.881629 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-combined-ca-bundle\") pod \"b745a073-e4cf-471d-92ce-ac5da568b38e\" (UID: \"b745a073-e4cf-471d-92ce-ac5da568b38e\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.884789 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de23a4c9-a62e-4523-8480-b19f3f10f586-logs" (OuterVolumeSpecName: "logs") pod "de23a4c9-a62e-4523-8480-b19f3f10f586" (UID: "de23a4c9-a62e-4523-8480-b19f3f10f586"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.884951 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b745a073-e4cf-471d-92ce-ac5da568b38e-kube-api-access-phpwr" (OuterVolumeSpecName: "kube-api-access-phpwr") pod "b745a073-e4cf-471d-92ce-ac5da568b38e" (UID: "b745a073-e4cf-471d-92ce-ac5da568b38e"). InnerVolumeSpecName "kube-api-access-phpwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.885232 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-scripts" (OuterVolumeSpecName: "scripts") pod "de23a4c9-a62e-4523-8480-b19f3f10f586" (UID: "de23a4c9-a62e-4523-8480-b19f3f10f586"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.893635 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b745a073-e4cf-471d-92ce-ac5da568b38e" (UID: "b745a073-e4cf-471d-92ce-ac5da568b38e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.896777 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de23a4c9-a62e-4523-8480-b19f3f10f586-kube-api-access-ch8dm" (OuterVolumeSpecName: "kube-api-access-ch8dm") pod "de23a4c9-a62e-4523-8480-b19f3f10f586" (UID: "de23a4c9-a62e-4523-8480-b19f3f10f586"). InnerVolumeSpecName "kube-api-access-ch8dm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.896973 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b745a073-e4cf-471d-92ce-ac5da568b38e" (UID: "b745a073-e4cf-471d-92ce-ac5da568b38e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.898337 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-scripts" (OuterVolumeSpecName: "scripts") pod "b745a073-e4cf-471d-92ce-ac5da568b38e" (UID: "b745a073-e4cf-471d-92ce-ac5da568b38e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.928420 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.941017 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de23a4c9-a62e-4523-8480-b19f3f10f586" (UID: "de23a4c9-a62e-4523-8480-b19f3f10f586"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.961739 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-config-data" (OuterVolumeSpecName: "config-data") pod "b745a073-e4cf-471d-92ce-ac5da568b38e" (UID: "b745a073-e4cf-471d-92ce-ac5da568b38e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.976526 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b745a073-e4cf-471d-92ce-ac5da568b38e" (UID: "b745a073-e4cf-471d-92ce-ac5da568b38e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.982735 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-config-data" (OuterVolumeSpecName: "config-data") pod "de23a4c9-a62e-4523-8480-b19f3f10f586" (UID: "de23a4c9-a62e-4523-8480-b19f3f10f586"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.983595 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6vvj\" (UniqueName: \"kubernetes.io/projected/89f27191-1460-4103-a832-acf1b0a8eca1-kube-api-access-c6vvj\") pod \"89f27191-1460-4103-a832-acf1b0a8eca1\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.983727 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-nb\") pod \"89f27191-1460-4103-a832-acf1b0a8eca1\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.983778 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-sb\") pod \"89f27191-1460-4103-a832-acf1b0a8eca1\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.983865 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-config\") pod \"89f27191-1460-4103-a832-acf1b0a8eca1\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.983897 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-swift-storage-0\") pod \"89f27191-1460-4103-a832-acf1b0a8eca1\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.983917 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-svc\") pod \"89f27191-1460-4103-a832-acf1b0a8eca1\" (UID: \"89f27191-1460-4103-a832-acf1b0a8eca1\") " Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.984348 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.984359 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch8dm\" (UniqueName: \"kubernetes.io/projected/de23a4c9-a62e-4523-8480-b19f3f10f586-kube-api-access-ch8dm\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.984370 4907 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.984378 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.984386 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.984393 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de23a4c9-a62e-4523-8480-b19f3f10f586-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.984401 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de23a4c9-a62e-4523-8480-b19f3f10f586-logs\") on 
node \"crc\" DevicePath \"\"" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.984408 4907 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.984416 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.984424 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phpwr\" (UniqueName: \"kubernetes.io/projected/b745a073-e4cf-471d-92ce-ac5da568b38e-kube-api-access-phpwr\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:31 crc kubenswrapper[4907]: I0127 18:28:31.984431 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b745a073-e4cf-471d-92ce-ac5da568b38e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.005290 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89f27191-1460-4103-a832-acf1b0a8eca1-kube-api-access-c6vvj" (OuterVolumeSpecName: "kube-api-access-c6vvj") pod "89f27191-1460-4103-a832-acf1b0a8eca1" (UID: "89f27191-1460-4103-a832-acf1b0a8eca1"). InnerVolumeSpecName "kube-api-access-c6vvj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.093070 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6vvj\" (UniqueName: \"kubernetes.io/projected/89f27191-1460-4103-a832-acf1b0a8eca1-kube-api-access-c6vvj\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.113382 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-config" (OuterVolumeSpecName: "config") pod "89f27191-1460-4103-a832-acf1b0a8eca1" (UID: "89f27191-1460-4103-a832-acf1b0a8eca1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.144852 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edc45c3a-8ebc-47ae-b823-b3013e4ea0df","Type":"ContainerStarted","Data":"a75310de648c3e38dbbf692b92cd3d98b2b70ebd876dc13e2fd06c3922f21dde"} Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.148049 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" event={"ID":"89f27191-1460-4103-a832-acf1b0a8eca1","Type":"ContainerDied","Data":"9f9319897192a3481febde0075f2336b2935d057d9a13f6811945fb00c33176e"} Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.148098 4907 scope.go:117] "RemoveContainer" containerID="f012675ca7dcdeb7509f93233e613fefdb3a4a00c3cef5c3d455bfc70e55795a" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.148234 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-55flw" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.158059 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "89f27191-1460-4103-a832-acf1b0a8eca1" (UID: "89f27191-1460-4103-a832-acf1b0a8eca1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.175395 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6gppf" event={"ID":"de23a4c9-a62e-4523-8480-b19f3f10f586","Type":"ContainerDied","Data":"58ac30f73a6589f6bdd1c4744cea13d714a24931697d0f9ec2983590de91b92b"} Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.175435 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58ac30f73a6589f6bdd1c4744cea13d714a24931697d0f9ec2983590de91b92b" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.175497 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6gppf" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.185886 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "89f27191-1460-4103-a832-acf1b0a8eca1" (UID: "89f27191-1460-4103-a832-acf1b0a8eca1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.196182 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.196220 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.196233 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.201839 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-px4wp" event={"ID":"b745a073-e4cf-471d-92ce-ac5da568b38e","Type":"ContainerDied","Data":"8748bf57233d635da5d97a9b78e3315d936eb272eba3df8885742cd6d9b0a7fa"} Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.201879 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8748bf57233d635da5d97a9b78e3315d936eb272eba3df8885742cd6d9b0a7fa" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.201951 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-px4wp" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.210731 4907 scope.go:117] "RemoveContainer" containerID="9372dc1a167de07a6718a73cef4ed28d22b27bf8d1e903c2b77f36fdfb200ef7" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.234681 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"472bdc20-aa30-4204-b7ef-ef2604ebc83f","Type":"ContainerStarted","Data":"eef493d8727c7561af3141a785f705db6a5238f850d803d437631d17aed992ed"} Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.247394 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "89f27191-1460-4103-a832-acf1b0a8eca1" (UID: "89f27191-1460-4103-a832-acf1b0a8eca1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.258235 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rl9vb" event={"ID":"90ffb508-65d2-4c20-95db-209a1c9a3399","Type":"ContainerStarted","Data":"47d2b1818f481f9157351010298e3904201a2d3e7fa436dd0e807a41c1c54a28"} Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.264239 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "89f27191-1460-4103-a832-acf1b0a8eca1" (UID: "89f27191-1460-4103-a832-acf1b0a8eca1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.296100 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-rl9vb" podStartSLOduration=2.544046842 podStartE2EDuration="47.296078012s" podCreationTimestamp="2026-01-27 18:27:45 +0000 UTC" firstStartedPulling="2026-01-27 18:27:46.803120607 +0000 UTC m=+1321.932403219" lastFinishedPulling="2026-01-27 18:28:31.555151777 +0000 UTC m=+1366.684434389" observedRunningTime="2026-01-27 18:28:32.294233009 +0000 UTC m=+1367.423515631" watchObservedRunningTime="2026-01-27 18:28:32.296078012 +0000 UTC m=+1367.425360614" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.299277 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.299319 4907 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89f27191-1460-4103-a832-acf1b0a8eca1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.752252 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-x9tl4" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.781750 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-55flw"] Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.795400 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-55flw"] Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.810147 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmb6l\" (UniqueName: \"kubernetes.io/projected/3d3838ba-a929-4aab-a58d-dd4f39628f00-kube-api-access-pmb6l\") pod \"3d3838ba-a929-4aab-a58d-dd4f39628f00\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.810208 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-combined-ca-bundle\") pod \"3d3838ba-a929-4aab-a58d-dd4f39628f00\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.810367 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-db-sync-config-data\") pod \"3d3838ba-a929-4aab-a58d-dd4f39628f00\" (UID: \"3d3838ba-a929-4aab-a58d-dd4f39628f00\") " Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.818076 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d3838ba-a929-4aab-a58d-dd4f39628f00-kube-api-access-pmb6l" (OuterVolumeSpecName: "kube-api-access-pmb6l") pod "3d3838ba-a929-4aab-a58d-dd4f39628f00" (UID: "3d3838ba-a929-4aab-a58d-dd4f39628f00"). InnerVolumeSpecName "kube-api-access-pmb6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.822981 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3d3838ba-a929-4aab-a58d-dd4f39628f00" (UID: "3d3838ba-a929-4aab-a58d-dd4f39628f00"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.886657 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d3838ba-a929-4aab-a58d-dd4f39628f00" (UID: "3d3838ba-a929-4aab-a58d-dd4f39628f00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.913026 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmb6l\" (UniqueName: \"kubernetes.io/projected/3d3838ba-a929-4aab-a58d-dd4f39628f00-kube-api-access-pmb6l\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.913073 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.913086 4907 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3d3838ba-a929-4aab-a58d-dd4f39628f00-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.997675 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-84847858bd-jp29w"] Jan 27 18:28:32 crc kubenswrapper[4907]: E0127 
18:28:32.998268 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de23a4c9-a62e-4523-8480-b19f3f10f586" containerName="placement-db-sync" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.998287 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="de23a4c9-a62e-4523-8480-b19f3f10f586" containerName="placement-db-sync" Jan 27 18:28:32 crc kubenswrapper[4907]: E0127 18:28:32.998320 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f27191-1460-4103-a832-acf1b0a8eca1" containerName="init" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.998327 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f27191-1460-4103-a832-acf1b0a8eca1" containerName="init" Jan 27 18:28:32 crc kubenswrapper[4907]: E0127 18:28:32.998346 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b745a073-e4cf-471d-92ce-ac5da568b38e" containerName="keystone-bootstrap" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.998360 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="b745a073-e4cf-471d-92ce-ac5da568b38e" containerName="keystone-bootstrap" Jan 27 18:28:32 crc kubenswrapper[4907]: E0127 18:28:32.998403 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3838ba-a929-4aab-a58d-dd4f39628f00" containerName="barbican-db-sync" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.998411 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3838ba-a929-4aab-a58d-dd4f39628f00" containerName="barbican-db-sync" Jan 27 18:28:32 crc kubenswrapper[4907]: E0127 18:28:32.998421 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f27191-1460-4103-a832-acf1b0a8eca1" containerName="dnsmasq-dns" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.998428 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f27191-1460-4103-a832-acf1b0a8eca1" containerName="dnsmasq-dns" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.998803 4907 
memory_manager.go:354] "RemoveStaleState removing state" podUID="89f27191-1460-4103-a832-acf1b0a8eca1" containerName="dnsmasq-dns" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.998833 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="de23a4c9-a62e-4523-8480-b19f3f10f586" containerName="placement-db-sync" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.998845 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3838ba-a929-4aab-a58d-dd4f39628f00" containerName="barbican-db-sync" Jan 27 18:28:32 crc kubenswrapper[4907]: I0127 18:28:32.998856 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="b745a073-e4cf-471d-92ce-ac5da568b38e" containerName="keystone-bootstrap" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:32.999850 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.007923 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.008250 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.008519 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.008696 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cspnd" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.008971 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.009665 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.025624 4907 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7bb5448674-jfs9k"] Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.040512 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.044290 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.049769 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.054187 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4w9sx" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.066408 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.074946 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.109760 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7bb5448674-jfs9k"] Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136313 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-combined-ca-bundle\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136392 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-credential-keys\") pod 
\"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136434 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb5z7\" (UniqueName: \"kubernetes.io/projected/4038dea7-e4ef-436d-baf3-47f8757e3bc0-kube-api-access-kb5z7\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136457 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-config-data\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136499 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-public-tls-certs\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136536 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-combined-ca-bundle\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136650 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-internal-tls-certs\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136708 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-public-tls-certs\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136754 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-scripts\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136807 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-scripts\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136833 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb2xg\" (UniqueName: \"kubernetes.io/projected/345bd96a-9890-4264-886f-edccc999706b-kube-api-access-cb2xg\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136878 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-fernet-keys\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136913 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-internal-tls-certs\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136942 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4038dea7-e4ef-436d-baf3-47f8757e3bc0-logs\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.136974 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-config-data\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.137325 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-84847858bd-jp29w"] Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.239758 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-fernet-keys\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.239815 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-internal-tls-certs\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.239843 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4038dea7-e4ef-436d-baf3-47f8757e3bc0-logs\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.239874 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-config-data\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.239913 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-combined-ca-bundle\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.239954 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-credential-keys\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.239986 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kb5z7\" (UniqueName: \"kubernetes.io/projected/4038dea7-e4ef-436d-baf3-47f8757e3bc0-kube-api-access-kb5z7\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.240006 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-config-data\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.240040 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-public-tls-certs\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.240067 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-combined-ca-bundle\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.240120 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-internal-tls-certs\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.240167 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-public-tls-certs\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.240214 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-scripts\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.240257 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-scripts\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.240284 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb2xg\" (UniqueName: \"kubernetes.io/projected/345bd96a-9890-4264-886f-edccc999706b-kube-api-access-cb2xg\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.244931 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4038dea7-e4ef-436d-baf3-47f8757e3bc0-logs\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.247917 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-fernet-keys\") pod \"keystone-84847858bd-jp29w\" (UID: 
\"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.248109 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-scripts\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.248323 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-credential-keys\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.249704 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-scripts\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.251055 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-config-data\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.253951 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-public-tls-certs\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.254124 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-internal-tls-certs\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.255041 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-combined-ca-bundle\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.260388 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-combined-ca-bundle\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.264396 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-public-tls-certs\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.264610 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/345bd96a-9890-4264-886f-edccc999706b-internal-tls-certs\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.271603 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb2xg\" 
(UniqueName: \"kubernetes.io/projected/345bd96a-9890-4264-886f-edccc999706b-kube-api-access-cb2xg\") pod \"keystone-84847858bd-jp29w\" (UID: \"345bd96a-9890-4264-886f-edccc999706b\") " pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.272764 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb5z7\" (UniqueName: \"kubernetes.io/projected/4038dea7-e4ef-436d-baf3-47f8757e3bc0-kube-api-access-kb5z7\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.273336 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4038dea7-e4ef-436d-baf3-47f8757e3bc0-config-data\") pod \"placement-7bb5448674-jfs9k\" (UID: \"4038dea7-e4ef-436d-baf3-47f8757e3bc0\") " pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.288778 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x9tl4" event={"ID":"3d3838ba-a929-4aab-a58d-dd4f39628f00","Type":"ContainerDied","Data":"840873a3d56f4dc36cd43eafa6b62d032d44d3c5edf94de7d25cbfc122cc2c74"} Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.288823 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="840873a3d56f4dc36cd43eafa6b62d032d44d3c5edf94de7d25cbfc122cc2c74" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.288918 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-x9tl4" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.300286 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edc45c3a-8ebc-47ae-b823-b3013e4ea0df","Type":"ContainerStarted","Data":"584795c084a119985cd393053285260241d0610d4ca09fe854b1805aec5eb536"} Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.300341 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edc45c3a-8ebc-47ae-b823-b3013e4ea0df","Type":"ContainerStarted","Data":"1ee0f45464a728d3c8b7c89d78049f812b80218fda30b0b45464599f00786431"} Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.314902 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26ebee0c-64db-4384-9e27-95691ee28a17","Type":"ContainerStarted","Data":"407e31536fe1940036e6ea2b9c37aa2d461f48d2b23b83ad52319ca807ee71be"} Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.350424 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.350405647 podStartE2EDuration="6.350405647s" podCreationTimestamp="2026-01-27 18:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:33.349009207 +0000 UTC m=+1368.478291819" watchObservedRunningTime="2026-01-27 18:28:33.350405647 +0000 UTC m=+1368.479688259" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.363279 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.393165 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.411631 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5f9fb848c6-w9s7n"] Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.413441 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.432537 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-sh25w" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.432901 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.433029 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.462889 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-cc6c576c9-l5q6m"] Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.479367 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.482177 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.513497 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-cc6c576c9-l5q6m"] Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.598052 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f9fb848c6-w9s7n"] Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.600633 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72844033-17b7-4a8b-973d-f8ef443cd529-config-data-custom\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.600713 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42z2b\" (UniqueName: \"kubernetes.io/projected/72844033-17b7-4a8b-973d-f8ef443cd529-kube-api-access-42z2b\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.600741 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-combined-ca-bundle\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.600762 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72844033-17b7-4a8b-973d-f8ef443cd529-combined-ca-bundle\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.600798 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-logs\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.600814 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72844033-17b7-4a8b-973d-f8ef443cd529-config-data\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.600836 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-config-data-custom\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.600868 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-config-data\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " 
pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.600920 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72844033-17b7-4a8b-973d-f8ef443cd529-logs\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.600953 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b59hc\" (UniqueName: \"kubernetes.io/projected/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-kube-api-access-b59hc\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.602514 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.602492158 podStartE2EDuration="7.602492158s" podCreationTimestamp="2026-01-27 18:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:33.404807285 +0000 UTC m=+1368.534089907" watchObservedRunningTime="2026-01-27 18:28:33.602492158 +0000 UTC m=+1368.731774780" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.652131 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-jlphs"] Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.666233 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: E0127 18:28:33.668935 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d3838ba_a929_4aab_a58d_dd4f39628f00.slice\": RecentStats: unable to find data in memory cache]" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.684029 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-jlphs"] Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.703893 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704213 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72844033-17b7-4a8b-973d-f8ef443cd529-logs\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704279 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b59hc\" (UniqueName: \"kubernetes.io/projected/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-kube-api-access-b59hc\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704326 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-config\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704363 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72844033-17b7-4a8b-973d-f8ef443cd529-config-data-custom\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704407 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-svc\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704461 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704486 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704654 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42z2b\" 
(UniqueName: \"kubernetes.io/projected/72844033-17b7-4a8b-973d-f8ef443cd529-kube-api-access-42z2b\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704696 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-combined-ca-bundle\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704720 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72844033-17b7-4a8b-973d-f8ef443cd529-combined-ca-bundle\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704782 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-logs\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704802 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72844033-17b7-4a8b-973d-f8ef443cd529-config-data\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704837 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-config-data-custom\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704876 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4thvt\" (UniqueName: \"kubernetes.io/projected/4eb40734-63ad-481e-8830-da770faf9a95-kube-api-access-4thvt\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.704909 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-config-data\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.705963 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-logs\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.706006 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72844033-17b7-4a8b-973d-f8ef443cd529-logs\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.720015 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72844033-17b7-4a8b-973d-f8ef443cd529-combined-ca-bundle\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.728019 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-combined-ca-bundle\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.728500 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72844033-17b7-4a8b-973d-f8ef443cd529-config-data-custom\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.728548 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-config-data-custom\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.744744 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-config-data\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.756799 4907 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-42z2b\" (UniqueName: \"kubernetes.io/projected/72844033-17b7-4a8b-973d-f8ef443cd529-kube-api-access-42z2b\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.757000 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72844033-17b7-4a8b-973d-f8ef443cd529-config-data\") pod \"barbican-worker-cc6c576c9-l5q6m\" (UID: \"72844033-17b7-4a8b-973d-f8ef443cd529\") " pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.823303 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-config\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.823412 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-svc\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.823492 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.823515 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.823758 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4thvt\" (UniqueName: \"kubernetes.io/projected/4eb40734-63ad-481e-8830-da770faf9a95-kube-api-access-4thvt\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.823826 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.830089 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-config\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.831634 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-svc\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.833772 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-nb\") 
pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.835260 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89f27191-1460-4103-a832-acf1b0a8eca1" path="/var/lib/kubelet/pods/89f27191-1460-4103-a832-acf1b0a8eca1/volumes" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.837624 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7f5bc66894-v82tp"] Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.839064 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.840481 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.840948 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.855158 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b59hc\" (UniqueName: \"kubernetes.io/projected/06cb3a1d-b998-43fe-8939-29cd2c3fd31f-kube-api-access-b59hc\") pod \"barbican-keystone-listener-5f9fb848c6-w9s7n\" (UID: \"06cb3a1d-b998-43fe-8939-29cd2c3fd31f\") " pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.855375 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.869835 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4thvt\" (UniqueName: \"kubernetes.io/projected/4eb40734-63ad-481e-8830-da770faf9a95-kube-api-access-4thvt\") pod \"dnsmasq-dns-688c87cc99-jlphs\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.936434 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f5bc66894-v82tp"] Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.943352 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" Jan 27 18:28:33 crc kubenswrapper[4907]: I0127 18:28:33.989072 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-cc6c576c9-l5q6m" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.000101 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.030089 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data-custom\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.030162 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.030209 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-combined-ca-bundle\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.030282 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-logs\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.030362 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nn5t\" (UniqueName: \"kubernetes.io/projected/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-kube-api-access-8nn5t\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.148764 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nn5t\" (UniqueName: \"kubernetes.io/projected/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-kube-api-access-8nn5t\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.148824 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data-custom\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.148867 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.148903 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-combined-ca-bundle\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.148983 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-logs\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.149337 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-logs\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.157868 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-combined-ca-bundle\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.160749 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.174234 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data-custom\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.252106 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nn5t\" (UniqueName: 
\"kubernetes.io/projected/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-kube-api-access-8nn5t\") pod \"barbican-api-7f5bc66894-v82tp\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") " pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.283488 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:34 crc kubenswrapper[4907]: W0127 18:28:34.529070 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod345bd96a_9890_4264_886f_edccc999706b.slice/crio-4322b17eb1f5e680e9801c8681d4ee09095bb03c074b0f4f25cb6bdaa3c61345 WatchSource:0}: Error finding container 4322b17eb1f5e680e9801c8681d4ee09095bb03c074b0f4f25cb6bdaa3c61345: Status 404 returned error can't find the container with id 4322b17eb1f5e680e9801c8681d4ee09095bb03c074b0f4f25cb6bdaa3c61345 Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.568380 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7bb5448674-jfs9k"] Jan 27 18:28:34 crc kubenswrapper[4907]: I0127 18:28:34.679500 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-84847858bd-jp29w"] Jan 27 18:28:35 crc kubenswrapper[4907]: I0127 18:28:35.319111 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f9fb848c6-w9s7n"] Jan 27 18:28:35 crc kubenswrapper[4907]: I0127 18:28:35.362998 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-cc6c576c9-l5q6m"] Jan 27 18:28:35 crc kubenswrapper[4907]: I0127 18:28:35.385233 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-84847858bd-jp29w" event={"ID":"345bd96a-9890-4264-886f-edccc999706b","Type":"ContainerStarted","Data":"4322b17eb1f5e680e9801c8681d4ee09095bb03c074b0f4f25cb6bdaa3c61345"} Jan 27 18:28:35 crc kubenswrapper[4907]: I0127 18:28:35.402216 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7bb5448674-jfs9k" event={"ID":"4038dea7-e4ef-436d-baf3-47f8757e3bc0","Type":"ContainerStarted","Data":"ec0522a7da0f218de9fc662607c664bcc16a29abfa0831436a984aa8182bf481"} Jan 27 18:28:35 crc kubenswrapper[4907]: I0127 18:28:35.486221 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-jlphs"] Jan 27 18:28:35 crc kubenswrapper[4907]: I0127 18:28:35.730185 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f5bc66894-v82tp"] Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.438869 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7bb5448674-jfs9k" event={"ID":"4038dea7-e4ef-436d-baf3-47f8757e3bc0","Type":"ContainerStarted","Data":"89aacfe958e6cb0acf3ee1e5bc3aa9599c2b76ad6a48366d7a127a5e6ba2f087"} Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.439241 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7bb5448674-jfs9k" event={"ID":"4038dea7-e4ef-436d-baf3-47f8757e3bc0","Type":"ContainerStarted","Data":"6d6069b81341f23c84eb8cbe50424ddc2390c73536fc5d8a699801d5814ef5ac"} Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.439631 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.442863 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-84847858bd-jp29w" event={"ID":"345bd96a-9890-4264-886f-edccc999706b","Type":"ContainerStarted","Data":"4d33d268c6a75e4b39e48513478196cb131df437afee47998e1a30f151ba37c8"} Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.443569 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.449534 4907 generic.go:334] "Generic (PLEG): container finished" 
podID="4eb40734-63ad-481e-8830-da770faf9a95" containerID="9d2a3a998744436098e95e9557eebcbc85270296883cf346fc8fcdd3970b9b6f" exitCode=0 Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.449677 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" event={"ID":"4eb40734-63ad-481e-8830-da770faf9a95","Type":"ContainerDied","Data":"9d2a3a998744436098e95e9557eebcbc85270296883cf346fc8fcdd3970b9b6f"} Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.449709 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" event={"ID":"4eb40734-63ad-481e-8830-da770faf9a95","Type":"ContainerStarted","Data":"9696ceb9132fbe70ed57d119137a4111952838c23ff4de6225536d7aaf063783"} Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.458433 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f5bc66894-v82tp" event={"ID":"9c425059-b69d-4bf6-ab4b-3c942d87f1a3","Type":"ContainerStarted","Data":"086cdd09c4fca79b9f7d44c131d3c05be3d0079630d34aba8bcd95bc6219fab2"} Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.458493 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f5bc66894-v82tp" event={"ID":"9c425059-b69d-4bf6-ab4b-3c942d87f1a3","Type":"ContainerStarted","Data":"23d12280d966edcb48634c6a46a4a55471151b9b7f710d4d1606e12380bba1d5"} Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.465546 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" event={"ID":"06cb3a1d-b998-43fe-8939-29cd2c3fd31f","Type":"ContainerStarted","Data":"63052a945af669aa2abed93c574f5734b705d5b0401841e6a5ca4ab454c90357"} Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.482465 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cc6c576c9-l5q6m" 
event={"ID":"72844033-17b7-4a8b-973d-f8ef443cd529","Type":"ContainerStarted","Data":"1a7df1f120bf2f34614d97fd9989a69f3305129b2ef83fa48dd4227d716d63ea"} Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.486777 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7bb5448674-jfs9k" podStartSLOduration=4.486750465 podStartE2EDuration="4.486750465s" podCreationTimestamp="2026-01-27 18:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:36.46492784 +0000 UTC m=+1371.594210462" watchObservedRunningTime="2026-01-27 18:28:36.486750465 +0000 UTC m=+1371.616033077" Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.587236 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-84847858bd-jp29w" podStartSLOduration=4.587215013 podStartE2EDuration="4.587215013s" podCreationTimestamp="2026-01-27 18:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:36.556205635 +0000 UTC m=+1371.685488247" watchObservedRunningTime="2026-01-27 18:28:36.587215013 +0000 UTC m=+1371.716497625" Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.673012 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.673072 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.765532 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:36 crc kubenswrapper[4907]: I0127 18:28:36.767521 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-internal-api-0" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.512724 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f5bc66894-v82tp" event={"ID":"9c425059-b69d-4bf6-ab4b-3c942d87f1a3","Type":"ContainerStarted","Data":"58b4cfef266bd7b14e986ac5aa8ba1668d9a55c3b795b4ca16c8af1b76881414"} Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.513073 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.513100 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.514004 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.693283 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7f5bc66894-v82tp" podStartSLOduration=4.693263399 podStartE2EDuration="4.693263399s" podCreationTimestamp="2026-01-27 18:28:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:37.534166791 +0000 UTC m=+1372.663449543" watchObservedRunningTime="2026-01-27 18:28:37.693263399 +0000 UTC m=+1372.822546011" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.704272 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6c7bdc78db-g6vvs"] Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.706023 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.708246 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.711723 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.721600 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c7bdc78db-g6vvs"] Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.762657 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.762698 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.788745 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-combined-ca-bundle\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.788828 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-public-tls-certs\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.788975 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-config-data-custom\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.789201 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-logs\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.789324 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-config-data\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.789363 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g786j\" (UniqueName: \"kubernetes.io/projected/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-kube-api-access-g786j\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.789484 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-internal-tls-certs\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.817178 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-external-api-0" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.821093 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.892157 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-combined-ca-bundle\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.892251 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-public-tls-certs\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.892285 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-config-data-custom\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.892350 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-logs\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.892414 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-config-data\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.892441 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g786j\" (UniqueName: \"kubernetes.io/projected/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-kube-api-access-g786j\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.892536 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-internal-tls-certs\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.893994 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-logs\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.904299 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-internal-tls-certs\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.911867 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-config-data-custom\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.912327 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-public-tls-certs\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.912861 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-combined-ca-bundle\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.912922 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-config-data\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:37 crc kubenswrapper[4907]: I0127 18:28:37.954283 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g786j\" (UniqueName: \"kubernetes.io/projected/eb7e48e3-f92d-4ee4-9074-9e035a54c8dc-kube-api-access-g786j\") pod \"barbican-api-6c7bdc78db-g6vvs\" (UID: \"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc\") " pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:38 crc kubenswrapper[4907]: I0127 18:28:38.038633 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:38 crc kubenswrapper[4907]: I0127 18:28:38.522734 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 18:28:38 crc kubenswrapper[4907]: I0127 18:28:38.523256 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 18:28:38 crc kubenswrapper[4907]: I0127 18:28:38.523279 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:38 crc kubenswrapper[4907]: I0127 18:28:38.523295 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:39 crc kubenswrapper[4907]: I0127 18:28:39.398286 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c7bdc78db-g6vvs"] Jan 27 18:28:39 crc kubenswrapper[4907]: I0127 18:28:39.543074 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" event={"ID":"4eb40734-63ad-481e-8830-da770faf9a95","Type":"ContainerStarted","Data":"a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8"} Jan 27 18:28:39 crc kubenswrapper[4907]: I0127 18:28:39.550071 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" event={"ID":"06cb3a1d-b998-43fe-8939-29cd2c3fd31f","Type":"ContainerStarted","Data":"149b34e0b046759fa665fe2421ca129e91c5f6a2a4c5f345aef8965e55ca7275"} Jan 27 18:28:39 crc kubenswrapper[4907]: I0127 18:28:39.560664 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cc6c576c9-l5q6m" event={"ID":"72844033-17b7-4a8b-973d-f8ef443cd529","Type":"ContainerStarted","Data":"0b95f02ae9ee3d72259d4e9811477be48d031b6c298c7518e8906db81acf8674"} Jan 27 18:28:39 crc kubenswrapper[4907]: I0127 18:28:39.570150 4907 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c7bdc78db-g6vvs" event={"ID":"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc","Type":"ContainerStarted","Data":"4195b724a8b5a24a8176a944768a7ddfe9eb962c0dbb1e8ebea60589295d1a54"} Jan 27 18:28:39 crc kubenswrapper[4907]: I0127 18:28:39.597356 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" podStartSLOduration=6.597333575 podStartE2EDuration="6.597333575s" podCreationTimestamp="2026-01-27 18:28:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:39.56851057 +0000 UTC m=+1374.697793182" watchObservedRunningTime="2026-01-27 18:28:39.597333575 +0000 UTC m=+1374.726616187" Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.601392 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cc6c576c9-l5q6m" event={"ID":"72844033-17b7-4a8b-973d-f8ef443cd529","Type":"ContainerStarted","Data":"d5365e972db180c3c14949f9027ca94f643322944433a6f5db8a03398d64eb46"} Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.603154 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c7bdc78db-g6vvs" event={"ID":"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc","Type":"ContainerStarted","Data":"8eb0dd15d856d236ff614dc3060a2dbfd45bb56180b7a264db0f960b52fa40a8"} Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.603177 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c7bdc78db-g6vvs" event={"ID":"eb7e48e3-f92d-4ee4-9074-9e035a54c8dc","Type":"ContainerStarted","Data":"30888fffed5c0d8cb46746b9164496c9ea600762ab5c1f070b4569ae375a32f7"} Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.603696 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.603735 4907 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.607336 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" event={"ID":"06cb3a1d-b998-43fe-8939-29cd2c3fd31f","Type":"ContainerStarted","Data":"ea147e6f6cbea0983451774119d0f87c9f47e13823e7e13b3339b578eac6d575"} Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.609716 4907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.609741 4907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.609807 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kbngs" event={"ID":"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c","Type":"ContainerStarted","Data":"3518bec6a2e71252950966bac08f219ba89fc2257f1b77a5d56f0854105b5f87"} Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.610115 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.635780 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-cc6c576c9-l5q6m" podStartSLOduration=4.306250852 podStartE2EDuration="7.635760224s" podCreationTimestamp="2026-01-27 18:28:33 +0000 UTC" firstStartedPulling="2026-01-27 18:28:35.394262679 +0000 UTC m=+1370.523545291" lastFinishedPulling="2026-01-27 18:28:38.723772051 +0000 UTC m=+1373.853054663" observedRunningTime="2026-01-27 18:28:40.630893395 +0000 UTC m=+1375.760176017" watchObservedRunningTime="2026-01-27 18:28:40.635760224 +0000 UTC m=+1375.765042836" Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.656515 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-keystone-listener-5f9fb848c6-w9s7n" podStartSLOduration=4.252700118 podStartE2EDuration="7.656500498s" podCreationTimestamp="2026-01-27 18:28:33 +0000 UTC" firstStartedPulling="2026-01-27 18:28:35.394490866 +0000 UTC m=+1370.523773478" lastFinishedPulling="2026-01-27 18:28:38.798291246 +0000 UTC m=+1373.927573858" observedRunningTime="2026-01-27 18:28:40.656055345 +0000 UTC m=+1375.785337957" watchObservedRunningTime="2026-01-27 18:28:40.656500498 +0000 UTC m=+1375.785783100" Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.700396 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-kbngs" podStartSLOduration=6.405071701 podStartE2EDuration="55.700376375s" podCreationTimestamp="2026-01-27 18:27:45 +0000 UTC" firstStartedPulling="2026-01-27 18:27:47.196197467 +0000 UTC m=+1322.325480079" lastFinishedPulling="2026-01-27 18:28:36.491502141 +0000 UTC m=+1371.620784753" observedRunningTime="2026-01-27 18:28:40.689255547 +0000 UTC m=+1375.818538149" watchObservedRunningTime="2026-01-27 18:28:40.700376375 +0000 UTC m=+1375.829658987" Jan 27 18:28:40 crc kubenswrapper[4907]: I0127 18:28:40.724862 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6c7bdc78db-g6vvs" podStartSLOduration=3.724831316 podStartE2EDuration="3.724831316s" podCreationTimestamp="2026-01-27 18:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:40.722618632 +0000 UTC m=+1375.851901244" watchObservedRunningTime="2026-01-27 18:28:40.724831316 +0000 UTC m=+1375.854113918" Jan 27 18:28:42 crc kubenswrapper[4907]: I0127 18:28:42.157207 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:42 crc kubenswrapper[4907]: I0127 18:28:42.157706 4907 prober_manager.go:312] "Failed to trigger a manual 
run" probe="Readiness" Jan 27 18:28:42 crc kubenswrapper[4907]: I0127 18:28:42.161265 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 18:28:42 crc kubenswrapper[4907]: I0127 18:28:42.161397 4907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:28:42 crc kubenswrapper[4907]: I0127 18:28:42.163888 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 18:28:42 crc kubenswrapper[4907]: I0127 18:28:42.164236 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 18:28:43 crc kubenswrapper[4907]: I0127 18:28:43.646394 4907 generic.go:334] "Generic (PLEG): container finished" podID="90ffb508-65d2-4c20-95db-209a1c9a3399" containerID="47d2b1818f481f9157351010298e3904201a2d3e7fa436dd0e807a41c1c54a28" exitCode=0 Jan 27 18:28:43 crc kubenswrapper[4907]: I0127 18:28:43.646658 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rl9vb" event={"ID":"90ffb508-65d2-4c20-95db-209a1c9a3399","Type":"ContainerDied","Data":"47d2b1818f481f9157351010298e3904201a2d3e7fa436dd0e807a41c1c54a28"} Jan 27 18:28:44 crc kubenswrapper[4907]: I0127 18:28:44.002474 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:28:44 crc kubenswrapper[4907]: I0127 18:28:44.097482 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-knq4v"] Jan 27 18:28:44 crc kubenswrapper[4907]: I0127 18:28:44.097803 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" podUID="026a73e3-c86f-49dd-b04d-e8208c9ce9e2" containerName="dnsmasq-dns" containerID="cri-o://86d08bea6d3c9bed7838ecc53f7ccd3c171b17cb0b7994ed9bfe6c1a1920772f" gracePeriod=10 Jan 27 18:28:44 crc 
kubenswrapper[4907]: I0127 18:28:44.671705 4907 generic.go:334] "Generic (PLEG): container finished" podID="026a73e3-c86f-49dd-b04d-e8208c9ce9e2" containerID="86d08bea6d3c9bed7838ecc53f7ccd3c171b17cb0b7994ed9bfe6c1a1920772f" exitCode=0 Jan 27 18:28:44 crc kubenswrapper[4907]: I0127 18:28:44.671990 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" event={"ID":"026a73e3-c86f-49dd-b04d-e8208c9ce9e2","Type":"ContainerDied","Data":"86d08bea6d3c9bed7838ecc53f7ccd3c171b17cb0b7994ed9bfe6c1a1920772f"} Jan 27 18:28:45 crc kubenswrapper[4907]: I0127 18:28:45.020981 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" podUID="026a73e3-c86f-49dd-b04d-e8208c9ce9e2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.193:5353: connect: connection refused" Jan 27 18:28:45 crc kubenswrapper[4907]: I0127 18:28:45.820361 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7f5bc66894-v82tp" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api-log" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:28:45 crc kubenswrapper[4907]: I0127 18:28:45.823631 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-rl9vb" Jan 27 18:28:45 crc kubenswrapper[4907]: I0127 18:28:45.911456 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-config-data\") pod \"90ffb508-65d2-4c20-95db-209a1c9a3399\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " Jan 27 18:28:45 crc kubenswrapper[4907]: I0127 18:28:45.912202 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk57z\" (UniqueName: \"kubernetes.io/projected/90ffb508-65d2-4c20-95db-209a1c9a3399-kube-api-access-hk57z\") pod \"90ffb508-65d2-4c20-95db-209a1c9a3399\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " Jan 27 18:28:45 crc kubenswrapper[4907]: I0127 18:28:45.912373 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-combined-ca-bundle\") pod \"90ffb508-65d2-4c20-95db-209a1c9a3399\" (UID: \"90ffb508-65d2-4c20-95db-209a1c9a3399\") " Jan 27 18:28:45 crc kubenswrapper[4907]: I0127 18:28:45.919300 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90ffb508-65d2-4c20-95db-209a1c9a3399-kube-api-access-hk57z" (OuterVolumeSpecName: "kube-api-access-hk57z") pod "90ffb508-65d2-4c20-95db-209a1c9a3399" (UID: "90ffb508-65d2-4c20-95db-209a1c9a3399"). InnerVolumeSpecName "kube-api-access-hk57z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:45 crc kubenswrapper[4907]: I0127 18:28:45.947237 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90ffb508-65d2-4c20-95db-209a1c9a3399" (UID: "90ffb508-65d2-4c20-95db-209a1c9a3399"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:46 crc kubenswrapper[4907]: I0127 18:28:46.015829 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk57z\" (UniqueName: \"kubernetes.io/projected/90ffb508-65d2-4c20-95db-209a1c9a3399-kube-api-access-hk57z\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:46 crc kubenswrapper[4907]: I0127 18:28:46.015872 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:46 crc kubenswrapper[4907]: I0127 18:28:46.039342 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-config-data" (OuterVolumeSpecName: "config-data") pod "90ffb508-65d2-4c20-95db-209a1c9a3399" (UID: "90ffb508-65d2-4c20-95db-209a1c9a3399"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:46 crc kubenswrapper[4907]: I0127 18:28:46.119853 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90ffb508-65d2-4c20-95db-209a1c9a3399-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:46 crc kubenswrapper[4907]: I0127 18:28:46.258619 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:46 crc kubenswrapper[4907]: I0127 18:28:46.499805 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:46 crc kubenswrapper[4907]: I0127 18:28:46.698643 4907 generic.go:334] "Generic (PLEG): container finished" podID="fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" containerID="3518bec6a2e71252950966bac08f219ba89fc2257f1b77a5d56f0854105b5f87" exitCode=0 Jan 27 18:28:46 crc kubenswrapper[4907]: I0127 18:28:46.698735 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kbngs" event={"ID":"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c","Type":"ContainerDied","Data":"3518bec6a2e71252950966bac08f219ba89fc2257f1b77a5d56f0854105b5f87"} Jan 27 18:28:46 crc kubenswrapper[4907]: I0127 18:28:46.703797 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-rl9vb" Jan 27 18:28:46 crc kubenswrapper[4907]: I0127 18:28:46.703838 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rl9vb" event={"ID":"90ffb508-65d2-4c20-95db-209a1c9a3399","Type":"ContainerDied","Data":"fc4f516e379ca6129c715c0b9a600b0f5b1d171146eae6a53e9f72e6f97ae48c"} Jan 27 18:28:46 crc kubenswrapper[4907]: I0127 18:28:46.703864 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc4f516e379ca6129c715c0b9a600b0f5b1d171146eae6a53e9f72e6f97ae48c" Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.750569 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kbngs" event={"ID":"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c","Type":"ContainerDied","Data":"f268f2f373629b4712746f03d9c3a8027b21b68587212bf16359f8e777653bf7"} Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.751080 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f268f2f373629b4712746f03d9c3a8027b21b68587212bf16359f8e777653bf7" Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.759870 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" event={"ID":"026a73e3-c86f-49dd-b04d-e8208c9ce9e2","Type":"ContainerDied","Data":"078eb3921a7645a6b0c598308484ce9d504153fe66f1bf90605b880a55507b5b"} Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.760151 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="078eb3921a7645a6b0c598308484ce9d504153fe66f1bf90605b880a55507b5b" Jan 27 
18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.842430 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kbngs" Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.871004 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.897360 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r4jl\" (UniqueName: \"kubernetes.io/projected/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-kube-api-access-2r4jl\") pod \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.897695 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-etc-machine-id\") pod \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.897733 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-config-data\") pod \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.897812 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-scripts\") pod \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.897848 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-combined-ca-bundle\") pod \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.897924 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-db-sync-config-data\") pod \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\" (UID: \"fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c\") " Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.898673 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" (UID: "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.900427 4907 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.910731 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-kube-api-access-2r4jl" (OuterVolumeSpecName: "kube-api-access-2r4jl") pod "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" (UID: "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c"). InnerVolumeSpecName "kube-api-access-2r4jl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.913990 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" (UID: "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.919679 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-scripts" (OuterVolumeSpecName: "scripts") pod "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" (UID: "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.954586 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" (UID: "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:48 crc kubenswrapper[4907]: I0127 18:28:48.992969 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-config-data" (OuterVolumeSpecName: "config-data") pod "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" (UID: "fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.001369 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-swift-storage-0\") pod \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.001811 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-nb\") pod \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.001928 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-sb\") pod \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.002172 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr2nz\" (UniqueName: \"kubernetes.io/projected/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-kube-api-access-cr2nz\") pod \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.002286 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-svc\") pod \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.002395 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-config\") pod \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\" (UID: \"026a73e3-c86f-49dd-b04d-e8208c9ce9e2\") " Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.003154 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r4jl\" (UniqueName: \"kubernetes.io/projected/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-kube-api-access-2r4jl\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.003257 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.003336 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.003415 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.003492 4907 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.011950 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-kube-api-access-cr2nz" (OuterVolumeSpecName: "kube-api-access-cr2nz") pod "026a73e3-c86f-49dd-b04d-e8208c9ce9e2" (UID: "026a73e3-c86f-49dd-b04d-e8208c9ce9e2"). InnerVolumeSpecName "kube-api-access-cr2nz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.077517 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "026a73e3-c86f-49dd-b04d-e8208c9ce9e2" (UID: "026a73e3-c86f-49dd-b04d-e8208c9ce9e2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.097923 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "026a73e3-c86f-49dd-b04d-e8208c9ce9e2" (UID: "026a73e3-c86f-49dd-b04d-e8208c9ce9e2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.098911 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "026a73e3-c86f-49dd-b04d-e8208c9ce9e2" (UID: "026a73e3-c86f-49dd-b04d-e8208c9ce9e2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.100819 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "026a73e3-c86f-49dd-b04d-e8208c9ce9e2" (UID: "026a73e3-c86f-49dd-b04d-e8208c9ce9e2"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.105285 4907 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.105319 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.105331 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.105340 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr2nz\" (UniqueName: \"kubernetes.io/projected/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-kube-api-access-cr2nz\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.105350 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.106038 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-config" (OuterVolumeSpecName: "config") pod "026a73e3-c86f-49dd-b04d-e8208c9ce9e2" (UID: "026a73e3-c86f-49dd-b04d-e8208c9ce9e2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.207153 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/026a73e3-c86f-49dd-b04d-e8208c9ce9e2-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.593627 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.710794 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c7bdc78db-g6vvs" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.800096 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kbngs" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.800766 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"472bdc20-aa30-4204-b7ef-ef2604ebc83f","Type":"ContainerStarted","Data":"afaf636821c79810532f3296fc1c35116eff1340a7fa9e9d898aadd7f5366aaa"} Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.800918 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-knq4v" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.804962 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="ceilometer-central-agent" containerID="cri-o://696acc4cb63963503279e3a1b33ea3557463eec6057dd239e03b27593a53f0f7" gracePeriod=30 Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.805124 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="proxy-httpd" containerID="cri-o://afaf636821c79810532f3296fc1c35116eff1340a7fa9e9d898aadd7f5366aaa" gracePeriod=30 Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.805179 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="sg-core" containerID="cri-o://eef493d8727c7561af3141a785f705db6a5238f850d803d437631d17aed992ed" gracePeriod=30 Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.805220 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="ceilometer-notification-agent" containerID="cri-o://e31ac7857a0a7cf939ca1305a14e857b669105350be60af9d31c438d388f56e1" gracePeriod=30 Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.810447 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7f5bc66894-v82tp"] Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.811758 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7f5bc66894-v82tp" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api-log" containerID="cri-o://086cdd09c4fca79b9f7d44c131d3c05be3d0079630d34aba8bcd95bc6219fab2" gracePeriod=30 
Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.811926 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7f5bc66894-v82tp" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api" containerID="cri-o://58b4cfef266bd7b14e986ac5aa8ba1668d9a55c3b795b4ca16c8af1b76881414" gracePeriod=30 Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.822275 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7f5bc66894-v82tp" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.203:9311/healthcheck\": EOF" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.850277 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.541452926 podStartE2EDuration="1m4.850260366s" podCreationTimestamp="2026-01-27 18:27:45 +0000 UTC" firstStartedPulling="2026-01-27 18:27:47.328361564 +0000 UTC m=+1322.457644176" lastFinishedPulling="2026-01-27 18:28:48.637169004 +0000 UTC m=+1383.766451616" observedRunningTime="2026-01-27 18:28:49.843637657 +0000 UTC m=+1384.972920269" watchObservedRunningTime="2026-01-27 18:28:49.850260366 +0000 UTC m=+1384.979542978" Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.905634 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-knq4v"] Jan 27 18:28:49 crc kubenswrapper[4907]: I0127 18:28:49.970629 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-knq4v"] Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.123004 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.174691 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 18:28:50 crc kubenswrapper[4907]: 
E0127 18:28:50.175156 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" containerName="cinder-db-sync" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.175172 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" containerName="cinder-db-sync" Jan 27 18:28:50 crc kubenswrapper[4907]: E0127 18:28:50.175205 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="026a73e3-c86f-49dd-b04d-e8208c9ce9e2" containerName="init" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.175212 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="026a73e3-c86f-49dd-b04d-e8208c9ce9e2" containerName="init" Jan 27 18:28:50 crc kubenswrapper[4907]: E0127 18:28:50.175221 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ffb508-65d2-4c20-95db-209a1c9a3399" containerName="heat-db-sync" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.175227 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ffb508-65d2-4c20-95db-209a1c9a3399" containerName="heat-db-sync" Jan 27 18:28:50 crc kubenswrapper[4907]: E0127 18:28:50.175257 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="026a73e3-c86f-49dd-b04d-e8208c9ce9e2" containerName="dnsmasq-dns" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.175263 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="026a73e3-c86f-49dd-b04d-e8208c9ce9e2" containerName="dnsmasq-dns" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.175467 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="90ffb508-65d2-4c20-95db-209a1c9a3399" containerName="heat-db-sync" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.175490 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="026a73e3-c86f-49dd-b04d-e8208c9ce9e2" containerName="dnsmasq-dns" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.175501 4907 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" containerName="cinder-db-sync" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.176801 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.182207 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.182382 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.182488 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.182607 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-vbpkv" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.207384 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-sksgl"] Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.209802 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.217138 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.236113 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwkv7\" (UniqueName: \"kubernetes.io/projected/2c24523b-b339-4889-9af6-19c8ec0b1048-kube-api-access-zwkv7\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.236238 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c24523b-b339-4889-9af6-19c8ec0b1048-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.236281 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.236429 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.236460 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.236682 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.237667 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-sksgl"] Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.339773 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.340100 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.340250 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.340305 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.340366 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwkv7\" (UniqueName: \"kubernetes.io/projected/2c24523b-b339-4889-9af6-19c8ec0b1048-kube-api-access-zwkv7\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.340394 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-config\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.340418 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.340440 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.340457 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-dtqwm\" (UniqueName: \"kubernetes.io/projected/07c0995e-8815-4b0f-bea0-e278aca1a898-kube-api-access-dtqwm\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.340497 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c24523b-b339-4889-9af6-19c8ec0b1048-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.340524 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.340548 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.351733 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c24523b-b339-4889-9af6-19c8ec0b1048-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.354261 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.389213 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.389989 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.390247 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.399155 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwkv7\" (UniqueName: \"kubernetes.io/projected/2c24523b-b339-4889-9af6-19c8ec0b1048-kube-api-access-zwkv7\") pod \"cinder-scheduler-0\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " pod="openstack/cinder-scheduler-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.445078 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " 
pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.445160 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-config\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.445179 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.445195 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.445210 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtqwm\" (UniqueName: \"kubernetes.io/projected/07c0995e-8815-4b0f-bea0-e278aca1a898-kube-api-access-dtqwm\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.445248 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" 
Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.446302 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.446326 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.447265 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.463538 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.465297 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.469432 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.480318 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtqwm\" (UniqueName: \"kubernetes.io/projected/07c0995e-8815-4b0f-bea0-e278aca1a898-kube-api-access-dtqwm\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.483512 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-config\") pod \"dnsmasq-dns-6bb4fc677f-sksgl\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.493170 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.493276 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.503455 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.540102 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.551814 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.551880 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f35073d6-fb6a-4896-8275-9f3632f0cd2f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.551919 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.551939 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data-custom\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.552033 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-scripts\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.552056 
4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f35073d6-fb6a-4896-8275-9f3632f0cd2f-logs\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.552077 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dfjq\" (UniqueName: \"kubernetes.io/projected/f35073d6-fb6a-4896-8275-9f3632f0cd2f-kube-api-access-7dfjq\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.657824 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.658534 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f35073d6-fb6a-4896-8275-9f3632f0cd2f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.658595 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.658621 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data-custom\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.658735 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-scripts\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.658761 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f35073d6-fb6a-4896-8275-9f3632f0cd2f-logs\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.658783 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dfjq\" (UniqueName: \"kubernetes.io/projected/f35073d6-fb6a-4896-8275-9f3632f0cd2f-kube-api-access-7dfjq\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.659125 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f35073d6-fb6a-4896-8275-9f3632f0cd2f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.662321 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f35073d6-fb6a-4896-8275-9f3632f0cd2f-logs\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.677945 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.685012 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-scripts\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.691543 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.698043 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dfjq\" (UniqueName: \"kubernetes.io/projected/f35073d6-fb6a-4896-8275-9f3632f0cd2f-kube-api-access-7dfjq\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.698520 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data-custom\") pod \"cinder-api-0\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.791103 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54487fdc5c-ktzbt"] Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.791324 4907 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/neutron-54487fdc5c-ktzbt" podUID="18fa0523-c08a-427c-b27e-77543fe4bd94" containerName="neutron-api" containerID="cri-o://8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78" gracePeriod=30 Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.794071 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-54487fdc5c-ktzbt" podUID="18fa0523-c08a-427c-b27e-77543fe4bd94" containerName="neutron-httpd" containerID="cri-o://5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a" gracePeriod=30 Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.811599 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.825334 4907 generic.go:334] "Generic (PLEG): container finished" podID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerID="086cdd09c4fca79b9f7d44c131d3c05be3d0079630d34aba8bcd95bc6219fab2" exitCode=143 Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.825394 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f5bc66894-v82tp" event={"ID":"9c425059-b69d-4bf6-ab4b-3c942d87f1a3","Type":"ContainerDied","Data":"086cdd09c4fca79b9f7d44c131d3c05be3d0079630d34aba8bcd95bc6219fab2"} Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.866494 4907 generic.go:334] "Generic (PLEG): container finished" podID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerID="afaf636821c79810532f3296fc1c35116eff1340a7fa9e9d898aadd7f5366aaa" exitCode=0 Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.866523 4907 generic.go:334] "Generic (PLEG): container finished" podID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerID="eef493d8727c7561af3141a785f705db6a5238f850d803d437631d17aed992ed" exitCode=2 Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.866544 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"472bdc20-aa30-4204-b7ef-ef2604ebc83f","Type":"ContainerDied","Data":"afaf636821c79810532f3296fc1c35116eff1340a7fa9e9d898aadd7f5366aaa"} Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.866585 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"472bdc20-aa30-4204-b7ef-ef2604ebc83f","Type":"ContainerDied","Data":"eef493d8727c7561af3141a785f705db6a5238f850d803d437631d17aed992ed"} Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.905626 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-74c6c685b5-88m65"] Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.913444 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.934068 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74c6c685b5-88m65"] Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.980622 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-public-tls-certs\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.981050 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-ovndb-tls-certs\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.981072 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-combined-ca-bundle\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.981145 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-internal-tls-certs\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.981170 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-httpd-config\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.981334 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-config\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:50 crc kubenswrapper[4907]: I0127 18:28:50.981439 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlcvj\" (UniqueName: \"kubernetes.io/projected/eb34862c-021c-4e5e-b4c0-ceffb9222438-kube-api-access-zlcvj\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.083803 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-public-tls-certs\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.083945 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-ovndb-tls-certs\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.083967 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-combined-ca-bundle\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.084019 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-internal-tls-certs\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.084040 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-httpd-config\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.084074 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-config\") pod 
\"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.084110 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlcvj\" (UniqueName: \"kubernetes.io/projected/eb34862c-021c-4e5e-b4c0-ceffb9222438-kube-api-access-zlcvj\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.102393 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-ovndb-tls-certs\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.104317 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-combined-ca-bundle\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.112363 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-httpd-config\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.112991 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-config\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 
18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.116781 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-public-tls-certs\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.116915 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb34862c-021c-4e5e-b4c0-ceffb9222438-internal-tls-certs\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.126859 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlcvj\" (UniqueName: \"kubernetes.io/projected/eb34862c-021c-4e5e-b4c0-ceffb9222438-kube-api-access-zlcvj\") pod \"neutron-74c6c685b5-88m65\" (UID: \"eb34862c-021c-4e5e-b4c0-ceffb9222438\") " pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.166400 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.174735 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-54487fdc5c-ktzbt" podUID="18fa0523-c08a-427c-b27e-77543fe4bd94" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.195:9696/\": read tcp 10.217.0.2:45666->10.217.0.195:9696: read: connection reset by peer" Jan 27 18:28:51 crc kubenswrapper[4907]: W0127 18:28:51.536261 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c24523b_b339_4889_9af6_19c8ec0b1048.slice/crio-6c912c299866222b6abf6077cbfac63bac53cc6ffdbda480cbf61679600530d7 WatchSource:0}: Error finding container 6c912c299866222b6abf6077cbfac63bac53cc6ffdbda480cbf61679600530d7: Status 404 returned error can't find the container with id 6c912c299866222b6abf6077cbfac63bac53cc6ffdbda480cbf61679600530d7 Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.543231 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.676970 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-sksgl"] Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.806019 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="026a73e3-c86f-49dd-b04d-e8208c9ce9e2" path="/var/lib/kubelet/pods/026a73e3-c86f-49dd-b04d-e8208c9ce9e2/volumes" Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.806874 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.831675 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74c6c685b5-88m65"] Jan 27 18:28:51 crc kubenswrapper[4907]: W0127 18:28:51.882382 4907 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb34862c_021c_4e5e_b4c0_ceffb9222438.slice/crio-e0ddfcc6c88620984547a9230aa3327176c40fff52280cc01bebc43a8ee678f9 WatchSource:0}: Error finding container e0ddfcc6c88620984547a9230aa3327176c40fff52280cc01bebc43a8ee678f9: Status 404 returned error can't find the container with id e0ddfcc6c88620984547a9230aa3327176c40fff52280cc01bebc43a8ee678f9 Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.923830 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c24523b-b339-4889-9af6-19c8ec0b1048","Type":"ContainerStarted","Data":"6c912c299866222b6abf6077cbfac63bac53cc6ffdbda480cbf61679600530d7"} Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.957040 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" event={"ID":"07c0995e-8815-4b0f-bea0-e278aca1a898","Type":"ContainerStarted","Data":"9302d404bfba0eee2d8da4cca550efe55dc895f1186f50f082b7659191ac1d96"} Jan 27 18:28:51 crc kubenswrapper[4907]: I0127 18:28:51.975479 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f35073d6-fb6a-4896-8275-9f3632f0cd2f","Type":"ContainerStarted","Data":"d0dcb4cda3e336fc0ccb50a2d68dd9abd7cba58c7276a287ff928b11ff880a1e"} Jan 27 18:28:52 crc kubenswrapper[4907]: I0127 18:28:52.012609 4907 generic.go:334] "Generic (PLEG): container finished" podID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerID="696acc4cb63963503279e3a1b33ea3557463eec6057dd239e03b27593a53f0f7" exitCode=0 Jan 27 18:28:52 crc kubenswrapper[4907]: I0127 18:28:52.012669 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"472bdc20-aa30-4204-b7ef-ef2604ebc83f","Type":"ContainerDied","Data":"696acc4cb63963503279e3a1b33ea3557463eec6057dd239e03b27593a53f0f7"} Jan 27 18:28:52 crc kubenswrapper[4907]: I0127 18:28:52.025865 4907 generic.go:334] "Generic (PLEG): container 
finished" podID="18fa0523-c08a-427c-b27e-77543fe4bd94" containerID="5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a" exitCode=0 Jan 27 18:28:52 crc kubenswrapper[4907]: I0127 18:28:52.025913 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54487fdc5c-ktzbt" event={"ID":"18fa0523-c08a-427c-b27e-77543fe4bd94","Type":"ContainerDied","Data":"5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a"} Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.055966 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-54487fdc5c-ktzbt" podUID="18fa0523-c08a-427c-b27e-77543fe4bd94" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.195:9696/\": dial tcp 10.217.0.195:9696: connect: connection refused" Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.061749 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c6c685b5-88m65" event={"ID":"eb34862c-021c-4e5e-b4c0-ceffb9222438","Type":"ContainerStarted","Data":"ae5602f58ce30fbc6eef4a10ecf3cb4b9657ca313c72b5dfd16c2fed7fc614ae"} Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.061798 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c6c685b5-88m65" event={"ID":"eb34862c-021c-4e5e-b4c0-ceffb9222438","Type":"ContainerStarted","Data":"b1ac04422082733d676bd4d7806ca0f76ee46da4044056870033680fca782d4b"} Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.061811 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c6c685b5-88m65" event={"ID":"eb34862c-021c-4e5e-b4c0-ceffb9222438","Type":"ContainerStarted","Data":"e0ddfcc6c88620984547a9230aa3327176c40fff52280cc01bebc43a8ee678f9"} Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.076710 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.107634 4907 
generic.go:334] "Generic (PLEG): container finished" podID="07c0995e-8815-4b0f-bea0-e278aca1a898" containerID="a183293e03f2475814a7a549a40c6fac89c967734284ac6b7832b0a0bcbbcc1b" exitCode=0 Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.107917 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" event={"ID":"07c0995e-8815-4b0f-bea0-e278aca1a898","Type":"ContainerDied","Data":"a183293e03f2475814a7a549a40c6fac89c967734284ac6b7832b0a0bcbbcc1b"} Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.131380 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f35073d6-fb6a-4896-8275-9f3632f0cd2f","Type":"ContainerStarted","Data":"3c651502395bfc5ae2b1b1e36cc5928717601b3d76447da17fe2507e0fb60320"} Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.153267 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-74c6c685b5-88m65" podStartSLOduration=3.153247118 podStartE2EDuration="3.153247118s" podCreationTimestamp="2026-01-27 18:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:53.123160746 +0000 UTC m=+1388.252443358" watchObservedRunningTime="2026-01-27 18:28:53.153247118 +0000 UTC m=+1388.282529730" Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.263036 4907 generic.go:334] "Generic (PLEG): container finished" podID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerID="e31ac7857a0a7cf939ca1305a14e857b669105350be60af9d31c438d388f56e1" exitCode=0 Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.263298 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"472bdc20-aa30-4204-b7ef-ef2604ebc83f","Type":"ContainerDied","Data":"e31ac7857a0a7cf939ca1305a14e857b669105350be60af9d31c438d388f56e1"} Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.319587 4907 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.346819 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.388819 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-scripts\") pod \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.388866 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-config-data\") pod \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.388927 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-run-httpd\") pod \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.389065 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-combined-ca-bundle\") pod \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.389090 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-sg-core-conf-yaml\") pod \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") " Jan 
27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.389106 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-log-httpd\") pod \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") "
Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.389211 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmv27\" (UniqueName: \"kubernetes.io/projected/472bdc20-aa30-4204-b7ef-ef2604ebc83f-kube-api-access-zmv27\") pod \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") "
Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.389682 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "472bdc20-aa30-4204-b7ef-ef2604ebc83f" (UID: "472bdc20-aa30-4204-b7ef-ef2604ebc83f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.407971 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "472bdc20-aa30-4204-b7ef-ef2604ebc83f" (UID: "472bdc20-aa30-4204-b7ef-ef2604ebc83f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.428776 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/472bdc20-aa30-4204-b7ef-ef2604ebc83f-kube-api-access-zmv27" (OuterVolumeSpecName: "kube-api-access-zmv27") pod "472bdc20-aa30-4204-b7ef-ef2604ebc83f" (UID: "472bdc20-aa30-4204-b7ef-ef2604ebc83f"). InnerVolumeSpecName "kube-api-access-zmv27". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.449089 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-scripts" (OuterVolumeSpecName: "scripts") pod "472bdc20-aa30-4204-b7ef-ef2604ebc83f" (UID: "472bdc20-aa30-4204-b7ef-ef2604ebc83f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.495316 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmv27\" (UniqueName: \"kubernetes.io/projected/472bdc20-aa30-4204-b7ef-ef2604ebc83f-kube-api-access-zmv27\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.495343 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.495353 4907 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.495363 4907 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/472bdc20-aa30-4204-b7ef-ef2604ebc83f-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.514333 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "472bdc20-aa30-4204-b7ef-ef2604ebc83f" (UID: "472bdc20-aa30-4204-b7ef-ef2604ebc83f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.597262 4907 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:53 crc kubenswrapper[4907]: E0127 18:28:53.630316 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-combined-ca-bundle podName:472bdc20-aa30-4204-b7ef-ef2604ebc83f nodeName:}" failed. No retries permitted until 2026-01-27 18:28:54.130289874 +0000 UTC m=+1389.259572486 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-combined-ca-bundle") pod "472bdc20-aa30-4204-b7ef-ef2604ebc83f" (UID: "472bdc20-aa30-4204-b7ef-ef2604ebc83f") : error deleting /var/lib/kubelet/pods/472bdc20-aa30-4204-b7ef-ef2604ebc83f/volume-subpaths: remove /var/lib/kubelet/pods/472bdc20-aa30-4204-b7ef-ef2604ebc83f/volume-subpaths: no such file or directory
Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.637659 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-config-data" (OuterVolumeSpecName: "config-data") pod "472bdc20-aa30-4204-b7ef-ef2604ebc83f" (UID: "472bdc20-aa30-4204-b7ef-ef2604ebc83f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:28:53 crc kubenswrapper[4907]: I0127 18:28:53.699790 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.498810 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-combined-ca-bundle\") pod \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\" (UID: \"472bdc20-aa30-4204-b7ef-ef2604ebc83f\") "
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.525620 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "472bdc20-aa30-4204-b7ef-ef2604ebc83f" (UID: "472bdc20-aa30-4204-b7ef-ef2604ebc83f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.550534 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" event={"ID":"07c0995e-8815-4b0f-bea0-e278aca1a898","Type":"ContainerStarted","Data":"09982e56c64ef7fd6a99732b67511cc84ff7488420b5bb4c84ffe5c12f4b277d"}
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.550604 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.562289 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"472bdc20-aa30-4204-b7ef-ef2604ebc83f","Type":"ContainerDied","Data":"f21eb03c891c1ca372e748b6131d8a4413d1eb5b66cad03fa9fdb685e87a089a"}
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.562352 4907 scope.go:117] "RemoveContainer" containerID="afaf636821c79810532f3296fc1c35116eff1340a7fa9e9d898aadd7f5366aaa"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.562381 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.576069 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c24523b-b339-4889-9af6-19c8ec0b1048","Type":"ContainerStarted","Data":"f9cb0039f19cc84dfc9b335f33bef26200be34801f227ec8176daa1111a7aa73"}
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.598336 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" podStartSLOduration=4.598314195 podStartE2EDuration="4.598314195s" podCreationTimestamp="2026-01-27 18:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:54.576283424 +0000 UTC m=+1389.705566046" watchObservedRunningTime="2026-01-27 18:28:54.598314195 +0000 UTC m=+1389.727596807"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.601449 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/472bdc20-aa30-4204-b7ef-ef2604ebc83f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.684800 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.693703 4907 scope.go:117] "RemoveContainer" containerID="eef493d8727c7561af3141a785f705db6a5238f850d803d437631d17aed992ed"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.711883 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.734288 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 27 18:28:54 crc kubenswrapper[4907]: E0127 18:28:54.735009 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="proxy-httpd"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.735031 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="proxy-httpd"
Jan 27 18:28:54 crc kubenswrapper[4907]: E0127 18:28:54.735055 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="ceilometer-central-agent"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.735066 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="ceilometer-central-agent"
Jan 27 18:28:54 crc kubenswrapper[4907]: E0127 18:28:54.735093 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="ceilometer-notification-agent"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.735102 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="ceilometer-notification-agent"
Jan 27 18:28:54 crc kubenswrapper[4907]: E0127 18:28:54.735140 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="sg-core"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.735149 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="sg-core"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.735478 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="ceilometer-notification-agent"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.735499 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="proxy-httpd"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.735522 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="ceilometer-central-agent"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.735535 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" containerName="sg-core"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.739250 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.742509 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.742570 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.745223 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.782757 4907 scope.go:117] "RemoveContainer" containerID="e31ac7857a0a7cf939ca1305a14e857b669105350be60af9d31c438d388f56e1"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.807167 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-log-httpd\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.807218 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-scripts\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.807324 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-run-httpd\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.807433 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-config-data\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.807484 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.807582 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjd92\" (UniqueName: \"kubernetes.io/projected/fd086e93-3ba0-4f66-a848-e139b0eaaef1-kube-api-access-kjd92\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.807661 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.909666 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-config-data\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.909713 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.909755 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjd92\" (UniqueName: \"kubernetes.io/projected/fd086e93-3ba0-4f66-a848-e139b0eaaef1-kube-api-access-kjd92\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.909804 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.909841 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-log-httpd\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.909864 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-scripts\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.909901 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-run-httpd\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.910337 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-run-httpd\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.911718 4907 scope.go:117] "RemoveContainer" containerID="696acc4cb63963503279e3a1b33ea3557463eec6057dd239e03b27593a53f0f7"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.913933 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-log-httpd\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.923274 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-config-data\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.930658 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-scripts\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.942691 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.944008 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0"
Jan 27 18:28:54 crc kubenswrapper[4907]: I0127 18:28:54.951810 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjd92\" (UniqueName: \"kubernetes.io/projected/fd086e93-3ba0-4f66-a848-e139b0eaaef1-kube-api-access-kjd92\") pod \"ceilometer-0\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " pod="openstack/ceilometer-0"
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.172938 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.359718 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f5bc66894-v82tp" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.203:9311/healthcheck\": read tcp 10.217.0.2:36646->10.217.0.203:9311: read: connection reset by peer"
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.360280 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f5bc66894-v82tp" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.203:9311/healthcheck\": read tcp 10.217.0.2:36656->10.217.0.203:9311: read: connection reset by peer"
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.532495 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54487fdc5c-ktzbt"
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.595659 4907 generic.go:334] "Generic (PLEG): container finished" podID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerID="58b4cfef266bd7b14e986ac5aa8ba1668d9a55c3b795b4ca16c8af1b76881414" exitCode=0
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.595754 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f5bc66894-v82tp" event={"ID":"9c425059-b69d-4bf6-ab4b-3c942d87f1a3","Type":"ContainerDied","Data":"58b4cfef266bd7b14e986ac5aa8ba1668d9a55c3b795b4ca16c8af1b76881414"}
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.598503 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c24523b-b339-4889-9af6-19c8ec0b1048","Type":"ContainerStarted","Data":"270c07cec0dbfa99788bb14621f9eb90695be925480521d822e999afab8c2bc1"}
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.602907 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f35073d6-fb6a-4896-8275-9f3632f0cd2f","Type":"ContainerStarted","Data":"6b4b43ee78df03df8e23dbdfa4e9af20a0fba7affbc188a37645423cefb167ef"}
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.603031 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f35073d6-fb6a-4896-8275-9f3632f0cd2f" containerName="cinder-api-log" containerID="cri-o://3c651502395bfc5ae2b1b1e36cc5928717601b3d76447da17fe2507e0fb60320" gracePeriod=30
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.603289 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.603320 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f35073d6-fb6a-4896-8275-9f3632f0cd2f" containerName="cinder-api" containerID="cri-o://6b4b43ee78df03df8e23dbdfa4e9af20a0fba7affbc188a37645423cefb167ef" gracePeriod=30
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.607673 4907 generic.go:334] "Generic (PLEG): container finished" podID="18fa0523-c08a-427c-b27e-77543fe4bd94" containerID="8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78" exitCode=0
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.608722 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54487fdc5c-ktzbt"
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.608945 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54487fdc5c-ktzbt" event={"ID":"18fa0523-c08a-427c-b27e-77543fe4bd94","Type":"ContainerDied","Data":"8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78"}
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.608968 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54487fdc5c-ktzbt" event={"ID":"18fa0523-c08a-427c-b27e-77543fe4bd94","Type":"ContainerDied","Data":"8568e05f8defdc45da4b5e782f2fecb663f59f14a17d880cf92f38ca7f0d7c34"}
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.608985 4907 scope.go:117] "RemoveContainer" containerID="5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a"
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.620238 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.573054781 podStartE2EDuration="5.62021594s" podCreationTimestamp="2026-01-27 18:28:50 +0000 UTC" firstStartedPulling="2026-01-27 18:28:51.540549998 +0000 UTC m=+1386.669832610" lastFinishedPulling="2026-01-27 18:28:52.587711157 +0000 UTC m=+1387.716993769" observedRunningTime="2026-01-27 18:28:55.619124609 +0000 UTC m=+1390.748407221" watchObservedRunningTime="2026-01-27 18:28:55.62021594 +0000 UTC m=+1390.749498572"
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.663073 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.663048677 podStartE2EDuration="5.663048677s" podCreationTimestamp="2026-01-27 18:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:55.654998627 +0000 UTC m=+1390.784281239" watchObservedRunningTime="2026-01-27 18:28:55.663048677 +0000 UTC m=+1390.792331289"
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.672987 4907 scope.go:117] "RemoveContainer" containerID="8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78"
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.719840 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.727776 4907 scope.go:117] "RemoveContainer" containerID="5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a"
Jan 27 18:28:55 crc kubenswrapper[4907]: E0127 18:28:55.729129 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a\": container with ID starting with 5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a not found: ID does not exist" containerID="5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a"
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.729187 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a"} err="failed to get container status \"5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a\": rpc error: code = NotFound desc = could not find container \"5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a\": container with ID starting with 5b7028635c5489a0f1f7919f08005d946ca654f3ed45cb6db86312dcbdb56e7a not found: ID does not exist"
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.729219 4907 scope.go:117] "RemoveContainer" containerID="8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78"
Jan 27 18:28:55 crc kubenswrapper[4907]: E0127 18:28:55.730089 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78\": container with ID starting with 8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78 not found: ID does not exist" containerID="8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78"
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.730119 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78"} err="failed to get container status \"8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78\": rpc error: code = NotFound desc = could not find container \"8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78\": container with ID starting with 8a374bf2b3408d9d2e4b7a861d8a3c24ddb17a960cb08e0ddc087416e796ba78 not found: ID does not exist"
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.734356 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-public-tls-certs\") pod \"18fa0523-c08a-427c-b27e-77543fe4bd94\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") "
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.734444 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-config\") pod \"18fa0523-c08a-427c-b27e-77543fe4bd94\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") "
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.734469 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-httpd-config\") pod \"18fa0523-c08a-427c-b27e-77543fe4bd94\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") "
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.734500 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-ovndb-tls-certs\") pod \"18fa0523-c08a-427c-b27e-77543fe4bd94\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") "
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.734577 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt9q7\" (UniqueName: \"kubernetes.io/projected/18fa0523-c08a-427c-b27e-77543fe4bd94-kube-api-access-zt9q7\") pod \"18fa0523-c08a-427c-b27e-77543fe4bd94\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") "
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.734600 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-combined-ca-bundle\") pod \"18fa0523-c08a-427c-b27e-77543fe4bd94\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") "
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.734715 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-internal-tls-certs\") pod \"18fa0523-c08a-427c-b27e-77543fe4bd94\" (UID: \"18fa0523-c08a-427c-b27e-77543fe4bd94\") "
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.750911 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18fa0523-c08a-427c-b27e-77543fe4bd94-kube-api-access-zt9q7" (OuterVolumeSpecName: "kube-api-access-zt9q7") pod "18fa0523-c08a-427c-b27e-77543fe4bd94" (UID: "18fa0523-c08a-427c-b27e-77543fe4bd94"). InnerVolumeSpecName "kube-api-access-zt9q7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.754667 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "18fa0523-c08a-427c-b27e-77543fe4bd94" (UID: "18fa0523-c08a-427c-b27e-77543fe4bd94"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.834754 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="472bdc20-aa30-4204-b7ef-ef2604ebc83f" path="/var/lib/kubelet/pods/472bdc20-aa30-4204-b7ef-ef2604ebc83f/volumes"
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.840321 4907 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.840594 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt9q7\" (UniqueName: \"kubernetes.io/projected/18fa0523-c08a-427c-b27e-77543fe4bd94-kube-api-access-zt9q7\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.852045 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18fa0523-c08a-427c-b27e-77543fe4bd94" (UID: "18fa0523-c08a-427c-b27e-77543fe4bd94"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.884359 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "18fa0523-c08a-427c-b27e-77543fe4bd94" (UID: "18fa0523-c08a-427c-b27e-77543fe4bd94"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.887374 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-config" (OuterVolumeSpecName: "config") pod "18fa0523-c08a-427c-b27e-77543fe4bd94" (UID: "18fa0523-c08a-427c-b27e-77543fe4bd94"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.932397 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "18fa0523-c08a-427c-b27e-77543fe4bd94" (UID: "18fa0523-c08a-427c-b27e-77543fe4bd94"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.943245 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.943590 4907 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.943625 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.943639 4907 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:55 crc kubenswrapper[4907]: I0127 18:28:55.949440 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "18fa0523-c08a-427c-b27e-77543fe4bd94" (UID: "18fa0523-c08a-427c-b27e-77543fe4bd94"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.042576 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f5bc66894-v82tp"
Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.045132 4907 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18fa0523-c08a-427c-b27e-77543fe4bd94-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.149771 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nn5t\" (UniqueName: \"kubernetes.io/projected/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-kube-api-access-8nn5t\") pod \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") "
Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.150274 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data\") pod \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") "
Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.150410 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data-custom\") pod \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") "
Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.150472 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-combined-ca-bundle\") pod \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") "
Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.150586 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-logs\") pod \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\" (UID: \"9c425059-b69d-4bf6-ab4b-3c942d87f1a3\") "
Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.151890 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-logs" (OuterVolumeSpecName: "logs") pod "9c425059-b69d-4bf6-ab4b-3c942d87f1a3" (UID: "9c425059-b69d-4bf6-ab4b-3c942d87f1a3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.172658 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9c425059-b69d-4bf6-ab4b-3c942d87f1a3" (UID: "9c425059-b69d-4bf6-ab4b-3c942d87f1a3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.194820 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-kube-api-access-8nn5t" (OuterVolumeSpecName: "kube-api-access-8nn5t") pod "9c425059-b69d-4bf6-ab4b-3c942d87f1a3" (UID: "9c425059-b69d-4bf6-ab4b-3c942d87f1a3"). InnerVolumeSpecName "kube-api-access-8nn5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.252856 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.262610 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nn5t\" (UniqueName: \"kubernetes.io/projected/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-kube-api-access-8nn5t\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.262860 4907 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.263571 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c425059-b69d-4bf6-ab4b-3c942d87f1a3" (UID: "9c425059-b69d-4bf6-ab4b-3c942d87f1a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.338138 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data" (OuterVolumeSpecName: "config-data") pod "9c425059-b69d-4bf6-ab4b-3c942d87f1a3" (UID: "9c425059-b69d-4bf6-ab4b-3c942d87f1a3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.355929 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54487fdc5c-ktzbt"] Jan 27 18:28:56 crc kubenswrapper[4907]: E0127 18:28:56.368721 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf35073d6_fb6a_4896_8275_9f3632f0cd2f.slice/crio-conmon-6b4b43ee78df03df8e23dbdfa4e9af20a0fba7affbc188a37645423cefb167ef.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf35073d6_fb6a_4896_8275_9f3632f0cd2f.slice/crio-6b4b43ee78df03df8e23dbdfa4e9af20a0fba7affbc188a37645423cefb167ef.scope\": RecentStats: unable to find data in memory cache]" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.381306 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.381348 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c425059-b69d-4bf6-ab4b-3c942d87f1a3-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.385580 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-54487fdc5c-ktzbt"] Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.631813 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd086e93-3ba0-4f66-a848-e139b0eaaef1","Type":"ContainerStarted","Data":"4c4954bde20aa461f7f624165c8484db027fd5ee67d6b3e834e2b80c68780516"} Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.632257 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"fd086e93-3ba0-4f66-a848-e139b0eaaef1","Type":"ContainerStarted","Data":"0dcda993740684fc7c475d953a4c78e14e7f5c8fbe1eb87431b09fbc9bf63899"} Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.637222 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f5bc66894-v82tp" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.637699 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f5bc66894-v82tp" event={"ID":"9c425059-b69d-4bf6-ab4b-3c942d87f1a3","Type":"ContainerDied","Data":"23d12280d966edcb48634c6a46a4a55471151b9b7f710d4d1606e12380bba1d5"} Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.637770 4907 scope.go:117] "RemoveContainer" containerID="58b4cfef266bd7b14e986ac5aa8ba1668d9a55c3b795b4ca16c8af1b76881414" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.639517 4907 generic.go:334] "Generic (PLEG): container finished" podID="f35073d6-fb6a-4896-8275-9f3632f0cd2f" containerID="6b4b43ee78df03df8e23dbdfa4e9af20a0fba7affbc188a37645423cefb167ef" exitCode=0 Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.639539 4907 generic.go:334] "Generic (PLEG): container finished" podID="f35073d6-fb6a-4896-8275-9f3632f0cd2f" containerID="3c651502395bfc5ae2b1b1e36cc5928717601b3d76447da17fe2507e0fb60320" exitCode=143 Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.639695 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f35073d6-fb6a-4896-8275-9f3632f0cd2f","Type":"ContainerDied","Data":"6b4b43ee78df03df8e23dbdfa4e9af20a0fba7affbc188a37645423cefb167ef"} Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.639717 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f35073d6-fb6a-4896-8275-9f3632f0cd2f","Type":"ContainerDied","Data":"3c651502395bfc5ae2b1b1e36cc5928717601b3d76447da17fe2507e0fb60320"} Jan 27 18:28:56 crc 
kubenswrapper[4907]: I0127 18:28:56.731774 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.748441 4907 scope.go:117] "RemoveContainer" containerID="086cdd09c4fca79b9f7d44c131d3c05be3d0079630d34aba8bcd95bc6219fab2" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.761852 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7f5bc66894-v82tp"] Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.781600 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7f5bc66894-v82tp"] Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.796617 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dfjq\" (UniqueName: \"kubernetes.io/projected/f35073d6-fb6a-4896-8275-9f3632f0cd2f-kube-api-access-7dfjq\") pod \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.796672 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data-custom\") pod \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.796721 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-scripts\") pod \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.796817 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f35073d6-fb6a-4896-8275-9f3632f0cd2f-etc-machine-id\") pod 
\"f35073d6-fb6a-4896-8275-9f3632f0cd2f\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.796924 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-combined-ca-bundle\") pod \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.797022 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f35073d6-fb6a-4896-8275-9f3632f0cd2f-logs\") pod \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.797143 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data\") pod \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\" (UID: \"f35073d6-fb6a-4896-8275-9f3632f0cd2f\") " Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.797489 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f35073d6-fb6a-4896-8275-9f3632f0cd2f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f35073d6-fb6a-4896-8275-9f3632f0cd2f" (UID: "f35073d6-fb6a-4896-8275-9f3632f0cd2f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.798250 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f35073d6-fb6a-4896-8275-9f3632f0cd2f-logs" (OuterVolumeSpecName: "logs") pod "f35073d6-fb6a-4896-8275-9f3632f0cd2f" (UID: "f35073d6-fb6a-4896-8275-9f3632f0cd2f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.800316 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f35073d6-fb6a-4896-8275-9f3632f0cd2f-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.800345 4907 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f35073d6-fb6a-4896-8275-9f3632f0cd2f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.814784 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f35073d6-fb6a-4896-8275-9f3632f0cd2f-kube-api-access-7dfjq" (OuterVolumeSpecName: "kube-api-access-7dfjq") pod "f35073d6-fb6a-4896-8275-9f3632f0cd2f" (UID: "f35073d6-fb6a-4896-8275-9f3632f0cd2f"). InnerVolumeSpecName "kube-api-access-7dfjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.817844 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-scripts" (OuterVolumeSpecName: "scripts") pod "f35073d6-fb6a-4896-8275-9f3632f0cd2f" (UID: "f35073d6-fb6a-4896-8275-9f3632f0cd2f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.839573 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f35073d6-fb6a-4896-8275-9f3632f0cd2f" (UID: "f35073d6-fb6a-4896-8275-9f3632f0cd2f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.882271 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data" (OuterVolumeSpecName: "config-data") pod "f35073d6-fb6a-4896-8275-9f3632f0cd2f" (UID: "f35073d6-fb6a-4896-8275-9f3632f0cd2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.902081 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.902122 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.902134 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dfjq\" (UniqueName: \"kubernetes.io/projected/f35073d6-fb6a-4896-8275-9f3632f0cd2f-kube-api-access-7dfjq\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.902147 4907 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[4907]: I0127 18:28:56.944857 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f35073d6-fb6a-4896-8275-9f3632f0cd2f" (UID: "f35073d6-fb6a-4896-8275-9f3632f0cd2f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.004629 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35073d6-fb6a-4896-8275-9f3632f0cd2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.658493 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f35073d6-fb6a-4896-8275-9f3632f0cd2f","Type":"ContainerDied","Data":"d0dcb4cda3e336fc0ccb50a2d68dd9abd7cba58c7276a287ff928b11ff880a1e"} Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.658799 4907 scope.go:117] "RemoveContainer" containerID="6b4b43ee78df03df8e23dbdfa4e9af20a0fba7affbc188a37645423cefb167ef" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.658535 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.674040 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd086e93-3ba0-4f66-a848-e139b0eaaef1","Type":"ContainerStarted","Data":"30381b9e5c02b53fab3bb5b2164a15477d49d8271d2ec72a1467c4ad2b4048f9"} Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.687212 4907 scope.go:117] "RemoveContainer" containerID="3c651502395bfc5ae2b1b1e36cc5928717601b3d76447da17fe2507e0fb60320" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.709094 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.733754 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.788052 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18fa0523-c08a-427c-b27e-77543fe4bd94" path="/var/lib/kubelet/pods/18fa0523-c08a-427c-b27e-77543fe4bd94/volumes" 
Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.805399 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" path="/var/lib/kubelet/pods/9c425059-b69d-4bf6-ab4b-3c942d87f1a3/volumes" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.812522 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f35073d6-fb6a-4896-8275-9f3632f0cd2f" path="/var/lib/kubelet/pods/f35073d6-fb6a-4896-8275-9f3632f0cd2f/volumes" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.813446 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 18:28:57 crc kubenswrapper[4907]: E0127 18:28:57.814409 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18fa0523-c08a-427c-b27e-77543fe4bd94" containerName="neutron-httpd" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.814908 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="18fa0523-c08a-427c-b27e-77543fe4bd94" containerName="neutron-httpd" Jan 27 18:28:57 crc kubenswrapper[4907]: E0127 18:28:57.815057 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f35073d6-fb6a-4896-8275-9f3632f0cd2f" containerName="cinder-api" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.815115 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f35073d6-fb6a-4896-8275-9f3632f0cd2f" containerName="cinder-api" Jan 27 18:28:57 crc kubenswrapper[4907]: E0127 18:28:57.818987 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api-log" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.819111 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api-log" Jan 27 18:28:57 crc kubenswrapper[4907]: E0127 18:28:57.819193 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f35073d6-fb6a-4896-8275-9f3632f0cd2f" 
containerName="cinder-api-log" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.819264 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f35073d6-fb6a-4896-8275-9f3632f0cd2f" containerName="cinder-api-log" Jan 27 18:28:57 crc kubenswrapper[4907]: E0127 18:28:57.819409 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18fa0523-c08a-427c-b27e-77543fe4bd94" containerName="neutron-api" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.819472 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="18fa0523-c08a-427c-b27e-77543fe4bd94" containerName="neutron-api" Jan 27 18:28:57 crc kubenswrapper[4907]: E0127 18:28:57.819608 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.819704 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.820678 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="18fa0523-c08a-427c-b27e-77543fe4bd94" containerName="neutron-api" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.820793 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f35073d6-fb6a-4896-8275-9f3632f0cd2f" containerName="cinder-api" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.820886 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.820971 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f35073d6-fb6a-4896-8275-9f3632f0cd2f" containerName="cinder-api-log" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.821081 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="18fa0523-c08a-427c-b27e-77543fe4bd94" 
containerName="neutron-httpd" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.821168 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c425059-b69d-4bf6-ab4b-3c942d87f1a3" containerName="barbican-api-log" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.826027 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.826222 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.829419 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.829531 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.829435 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.943598 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-config-data\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.943662 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.943712 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-config-data-custom\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.943780 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-scripts\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.943810 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxx45\" (UniqueName: \"kubernetes.io/projected/f62bd754-7667-406a-9883-2015ddcc3f16-kube-api-access-rxx45\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.943855 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f62bd754-7667-406a-9883-2015ddcc3f16-logs\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.943883 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f62bd754-7667-406a-9883-2015ddcc3f16-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.944008 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-internal-tls-certs\") pod \"cinder-api-0\" (UID: 
\"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:57 crc kubenswrapper[4907]: I0127 18:28:57.944036 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.046537 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.046606 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.046631 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-config-data\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.046651 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.046685 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-config-data-custom\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.046743 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-scripts\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.046769 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxx45\" (UniqueName: \"kubernetes.io/projected/f62bd754-7667-406a-9883-2015ddcc3f16-kube-api-access-rxx45\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.046809 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f62bd754-7667-406a-9883-2015ddcc3f16-logs\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.046858 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f62bd754-7667-406a-9883-2015ddcc3f16-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.046998 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f62bd754-7667-406a-9883-2015ddcc3f16-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " 
pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.047297 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f62bd754-7667-406a-9883-2015ddcc3f16-logs\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.050997 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-config-data-custom\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.051209 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.051664 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.053490 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.054088 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-config-data\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.062567 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f62bd754-7667-406a-9883-2015ddcc3f16-scripts\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.067153 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxx45\" (UniqueName: \"kubernetes.io/projected/f62bd754-7667-406a-9883-2015ddcc3f16-kube-api-access-rxx45\") pod \"cinder-api-0\" (UID: \"f62bd754-7667-406a-9883-2015ddcc3f16\") " pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.153410 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.647858 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.684273 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f62bd754-7667-406a-9883-2015ddcc3f16","Type":"ContainerStarted","Data":"612ef568f992f05d9102a33aa03ceb4dd3f384fd3f8ed31e2abf3a65c0144486"} Jan 27 18:28:58 crc kubenswrapper[4907]: I0127 18:28:58.687616 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd086e93-3ba0-4f66-a848-e139b0eaaef1","Type":"ContainerStarted","Data":"be1c16ecb8b06599f8c451b8237965953ab21c1c1fa1a7ecf911f52449548e0f"} Jan 27 18:28:59 crc kubenswrapper[4907]: I0127 18:28:59.706081 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"f62bd754-7667-406a-9883-2015ddcc3f16","Type":"ContainerStarted","Data":"42ed3b791bfec87220988dd112bce9c37d8fc35aeb5c1ffe2f82f2eda67cbffd"} Jan 27 18:29:00 crc kubenswrapper[4907]: I0127 18:29:00.504157 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 18:29:00 crc kubenswrapper[4907]: I0127 18:29:00.542721 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:29:00 crc kubenswrapper[4907]: I0127 18:29:00.622589 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-jlphs"] Jan 27 18:29:00 crc kubenswrapper[4907]: I0127 18:29:00.622829 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" podUID="4eb40734-63ad-481e-8830-da770faf9a95" containerName="dnsmasq-dns" containerID="cri-o://a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8" gracePeriod=10 Jan 27 18:29:00 crc kubenswrapper[4907]: I0127 18:29:00.738401 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f62bd754-7667-406a-9883-2015ddcc3f16","Type":"ContainerStarted","Data":"5a595c94eabb08aabaf4e564cad10dc747a594f72a3a9c5050bb39af7be3d027"} Jan 27 18:29:00 crc kubenswrapper[4907]: I0127 18:29:00.739932 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 18:29:00 crc kubenswrapper[4907]: I0127 18:29:00.755873 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd086e93-3ba0-4f66-a848-e139b0eaaef1","Type":"ContainerStarted","Data":"b21ee0689f20c14515d918c7dda8214a9c152541cbda9470661c08b982a62fb6"} Jan 27 18:29:00 crc kubenswrapper[4907]: I0127 18:29:00.756832 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 18:29:00 crc kubenswrapper[4907]: I0127 
18:29:00.795383 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.795365334 podStartE2EDuration="3.795365334s" podCreationTimestamp="2026-01-27 18:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:00.791600697 +0000 UTC m=+1395.920883309" watchObservedRunningTime="2026-01-27 18:29:00.795365334 +0000 UTC m=+1395.924647946" Jan 27 18:29:00 crc kubenswrapper[4907]: I0127 18:29:00.847249 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.980145508 podStartE2EDuration="6.84723014s" podCreationTimestamp="2026-01-27 18:28:54 +0000 UTC" firstStartedPulling="2026-01-27 18:28:55.747648561 +0000 UTC m=+1390.876931173" lastFinishedPulling="2026-01-27 18:28:59.614733193 +0000 UTC m=+1394.744015805" observedRunningTime="2026-01-27 18:29:00.846455348 +0000 UTC m=+1395.975737970" watchObservedRunningTime="2026-01-27 18:29:00.84723014 +0000 UTC m=+1395.976512752" Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.083743 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.186658 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.748947 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.768018 4907 generic.go:334] "Generic (PLEG): container finished" podID="4eb40734-63ad-481e-8830-da770faf9a95" containerID="a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8" exitCode=0 Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.768114 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.768191 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" event={"ID":"4eb40734-63ad-481e-8830-da770faf9a95","Type":"ContainerDied","Data":"a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8"} Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.768225 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-jlphs" event={"ID":"4eb40734-63ad-481e-8830-da770faf9a95","Type":"ContainerDied","Data":"9696ceb9132fbe70ed57d119137a4111952838c23ff4de6225536d7aaf063783"} Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.768244 4907 scope.go:117] "RemoveContainer" containerID="a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8" Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.769097 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2c24523b-b339-4889-9af6-19c8ec0b1048" containerName="cinder-scheduler" containerID="cri-o://f9cb0039f19cc84dfc9b335f33bef26200be34801f227ec8176daa1111a7aa73" gracePeriod=30 Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.769250 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2c24523b-b339-4889-9af6-19c8ec0b1048" containerName="probe" 
containerID="cri-o://270c07cec0dbfa99788bb14621f9eb90695be925480521d822e999afab8c2bc1" gracePeriod=30 Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.835202 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-sb\") pod \"4eb40734-63ad-481e-8830-da770faf9a95\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.835255 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-nb\") pod \"4eb40734-63ad-481e-8830-da770faf9a95\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.835313 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4thvt\" (UniqueName: \"kubernetes.io/projected/4eb40734-63ad-481e-8830-da770faf9a95-kube-api-access-4thvt\") pod \"4eb40734-63ad-481e-8830-da770faf9a95\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.835360 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-config\") pod \"4eb40734-63ad-481e-8830-da770faf9a95\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.835453 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-swift-storage-0\") pod \"4eb40734-63ad-481e-8830-da770faf9a95\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.835489 4907 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-svc\") pod \"4eb40734-63ad-481e-8830-da770faf9a95\" (UID: \"4eb40734-63ad-481e-8830-da770faf9a95\") " Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.840273 4907 scope.go:117] "RemoveContainer" containerID="9d2a3a998744436098e95e9557eebcbc85270296883cf346fc8fcdd3970b9b6f" Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.847325 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eb40734-63ad-481e-8830-da770faf9a95-kube-api-access-4thvt" (OuterVolumeSpecName: "kube-api-access-4thvt") pod "4eb40734-63ad-481e-8830-da770faf9a95" (UID: "4eb40734-63ad-481e-8830-da770faf9a95"). InnerVolumeSpecName "kube-api-access-4thvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.919911 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4eb40734-63ad-481e-8830-da770faf9a95" (UID: "4eb40734-63ad-481e-8830-da770faf9a95"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.930505 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4eb40734-63ad-481e-8830-da770faf9a95" (UID: "4eb40734-63ad-481e-8830-da770faf9a95"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.943137 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4eb40734-63ad-481e-8830-da770faf9a95" (UID: "4eb40734-63ad-481e-8830-da770faf9a95"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.944020 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4thvt\" (UniqueName: \"kubernetes.io/projected/4eb40734-63ad-481e-8830-da770faf9a95-kube-api-access-4thvt\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.944054 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.944065 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.944073 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.950242 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-config" (OuterVolumeSpecName: "config") pod "4eb40734-63ad-481e-8830-da770faf9a95" (UID: "4eb40734-63ad-481e-8830-da770faf9a95"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:01 crc kubenswrapper[4907]: I0127 18:29:01.965070 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4eb40734-63ad-481e-8830-da770faf9a95" (UID: "4eb40734-63ad-481e-8830-da770faf9a95"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:02 crc kubenswrapper[4907]: I0127 18:29:02.047026 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:02 crc kubenswrapper[4907]: I0127 18:29:02.047066 4907 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4eb40734-63ad-481e-8830-da770faf9a95-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:02 crc kubenswrapper[4907]: I0127 18:29:02.055006 4907 scope.go:117] "RemoveContainer" containerID="a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8" Jan 27 18:29:02 crc kubenswrapper[4907]: E0127 18:29:02.055769 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8\": container with ID starting with a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8 not found: ID does not exist" containerID="a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8" Jan 27 18:29:02 crc kubenswrapper[4907]: I0127 18:29:02.055825 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8"} err="failed to get container status 
\"a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8\": rpc error: code = NotFound desc = could not find container \"a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8\": container with ID starting with a72a7d8fbe73a235e1f9de3be72c06d90d7106563dda742cfda42de0618550e8 not found: ID does not exist" Jan 27 18:29:02 crc kubenswrapper[4907]: I0127 18:29:02.055860 4907 scope.go:117] "RemoveContainer" containerID="9d2a3a998744436098e95e9557eebcbc85270296883cf346fc8fcdd3970b9b6f" Jan 27 18:29:02 crc kubenswrapper[4907]: E0127 18:29:02.056237 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d2a3a998744436098e95e9557eebcbc85270296883cf346fc8fcdd3970b9b6f\": container with ID starting with 9d2a3a998744436098e95e9557eebcbc85270296883cf346fc8fcdd3970b9b6f not found: ID does not exist" containerID="9d2a3a998744436098e95e9557eebcbc85270296883cf346fc8fcdd3970b9b6f" Jan 27 18:29:02 crc kubenswrapper[4907]: I0127 18:29:02.056260 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d2a3a998744436098e95e9557eebcbc85270296883cf346fc8fcdd3970b9b6f"} err="failed to get container status \"9d2a3a998744436098e95e9557eebcbc85270296883cf346fc8fcdd3970b9b6f\": rpc error: code = NotFound desc = could not find container \"9d2a3a998744436098e95e9557eebcbc85270296883cf346fc8fcdd3970b9b6f\": container with ID starting with 9d2a3a998744436098e95e9557eebcbc85270296883cf346fc8fcdd3970b9b6f not found: ID does not exist" Jan 27 18:29:02 crc kubenswrapper[4907]: I0127 18:29:02.109258 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-jlphs"] Jan 27 18:29:02 crc kubenswrapper[4907]: I0127 18:29:02.119701 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-jlphs"] Jan 27 18:29:02 crc kubenswrapper[4907]: I0127 18:29:02.787240 4907 generic.go:334] "Generic (PLEG): 
container finished" podID="2c24523b-b339-4889-9af6-19c8ec0b1048" containerID="270c07cec0dbfa99788bb14621f9eb90695be925480521d822e999afab8c2bc1" exitCode=0 Jan 27 18:29:02 crc kubenswrapper[4907]: I0127 18:29:02.787350 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c24523b-b339-4889-9af6-19c8ec0b1048","Type":"ContainerDied","Data":"270c07cec0dbfa99788bb14621f9eb90695be925480521d822e999afab8c2bc1"} Jan 27 18:29:03 crc kubenswrapper[4907]: I0127 18:29:03.789475 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eb40734-63ad-481e-8830-da770faf9a95" path="/var/lib/kubelet/pods/4eb40734-63ad-481e-8830-da770faf9a95/volumes" Jan 27 18:29:04 crc kubenswrapper[4907]: I0127 18:29:04.720454 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:29:04 crc kubenswrapper[4907]: I0127 18:29:04.725078 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7bb5448674-jfs9k" Jan 27 18:29:05 crc kubenswrapper[4907]: I0127 18:29:05.210336 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-84847858bd-jp29w" Jan 27 18:29:05 crc kubenswrapper[4907]: I0127 18:29:05.839064 4907 generic.go:334] "Generic (PLEG): container finished" podID="2c24523b-b339-4889-9af6-19c8ec0b1048" containerID="f9cb0039f19cc84dfc9b335f33bef26200be34801f227ec8176daa1111a7aa73" exitCode=0 Jan 27 18:29:05 crc kubenswrapper[4907]: I0127 18:29:05.839282 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c24523b-b339-4889-9af6-19c8ec0b1048","Type":"ContainerDied","Data":"f9cb0039f19cc84dfc9b335f33bef26200be34801f227ec8176daa1111a7aa73"} Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.126344 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.252209 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-combined-ca-bundle\") pod \"2c24523b-b339-4889-9af6-19c8ec0b1048\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.252464 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data\") pod \"2c24523b-b339-4889-9af6-19c8ec0b1048\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.252620 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c24523b-b339-4889-9af6-19c8ec0b1048-etc-machine-id\") pod \"2c24523b-b339-4889-9af6-19c8ec0b1048\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.252751 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-scripts\") pod \"2c24523b-b339-4889-9af6-19c8ec0b1048\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.252810 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwkv7\" (UniqueName: \"kubernetes.io/projected/2c24523b-b339-4889-9af6-19c8ec0b1048-kube-api-access-zwkv7\") pod \"2c24523b-b339-4889-9af6-19c8ec0b1048\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.253107 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/2c24523b-b339-4889-9af6-19c8ec0b1048-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2c24523b-b339-4889-9af6-19c8ec0b1048" (UID: "2c24523b-b339-4889-9af6-19c8ec0b1048"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.253525 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data-custom\") pod \"2c24523b-b339-4889-9af6-19c8ec0b1048\" (UID: \"2c24523b-b339-4889-9af6-19c8ec0b1048\") " Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.256322 4907 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c24523b-b339-4889-9af6-19c8ec0b1048-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.258895 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c24523b-b339-4889-9af6-19c8ec0b1048-kube-api-access-zwkv7" (OuterVolumeSpecName: "kube-api-access-zwkv7") pod "2c24523b-b339-4889-9af6-19c8ec0b1048" (UID: "2c24523b-b339-4889-9af6-19c8ec0b1048"). InnerVolumeSpecName "kube-api-access-zwkv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.258966 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-scripts" (OuterVolumeSpecName: "scripts") pod "2c24523b-b339-4889-9af6-19c8ec0b1048" (UID: "2c24523b-b339-4889-9af6-19c8ec0b1048"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.263614 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2c24523b-b339-4889-9af6-19c8ec0b1048" (UID: "2c24523b-b339-4889-9af6-19c8ec0b1048"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.339492 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c24523b-b339-4889-9af6-19c8ec0b1048" (UID: "2c24523b-b339-4889-9af6-19c8ec0b1048"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.358196 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.358235 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwkv7\" (UniqueName: \"kubernetes.io/projected/2c24523b-b339-4889-9af6-19c8ec0b1048-kube-api-access-zwkv7\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.358249 4907 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.358260 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.387250 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data" (OuterVolumeSpecName: "config-data") pod "2c24523b-b339-4889-9af6-19c8ec0b1048" (UID: "2c24523b-b339-4889-9af6-19c8ec0b1048"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.460240 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c24523b-b339-4889-9af6-19c8ec0b1048-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.850387 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c24523b-b339-4889-9af6-19c8ec0b1048","Type":"ContainerDied","Data":"6c912c299866222b6abf6077cbfac63bac53cc6ffdbda480cbf61679600530d7"} Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.850454 4907 scope.go:117] "RemoveContainer" containerID="270c07cec0dbfa99788bb14621f9eb90695be925480521d822e999afab8c2bc1" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.850489 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.882135 4907 scope.go:117] "RemoveContainer" containerID="f9cb0039f19cc84dfc9b335f33bef26200be34801f227ec8176daa1111a7aa73" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.909939 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.927604 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.948871 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 18:29:06 crc kubenswrapper[4907]: E0127 18:29:06.949398 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eb40734-63ad-481e-8830-da770faf9a95" containerName="dnsmasq-dns" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.949414 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eb40734-63ad-481e-8830-da770faf9a95" containerName="dnsmasq-dns" Jan 27 18:29:06 crc kubenswrapper[4907]: E0127 18:29:06.949443 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c24523b-b339-4889-9af6-19c8ec0b1048" containerName="cinder-scheduler" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.949451 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c24523b-b339-4889-9af6-19c8ec0b1048" containerName="cinder-scheduler" Jan 27 18:29:06 crc kubenswrapper[4907]: E0127 18:29:06.949469 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c24523b-b339-4889-9af6-19c8ec0b1048" containerName="probe" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.949478 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c24523b-b339-4889-9af6-19c8ec0b1048" containerName="probe" Jan 27 18:29:06 crc kubenswrapper[4907]: E0127 18:29:06.949526 4907 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4eb40734-63ad-481e-8830-da770faf9a95" containerName="init" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.949534 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eb40734-63ad-481e-8830-da770faf9a95" containerName="init" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.949794 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c24523b-b339-4889-9af6-19c8ec0b1048" containerName="probe" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.949824 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eb40734-63ad-481e-8830-da770faf9a95" containerName="dnsmasq-dns" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.949843 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c24523b-b339-4889-9af6-19c8ec0b1048" containerName="cinder-scheduler" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.951305 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.954634 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 18:29:06 crc kubenswrapper[4907]: I0127 18:29:06.964548 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.072587 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0" Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.072977 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-config-data\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0" Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.073132 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0" Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.073296 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-scripts\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0" Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.073403 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/621bccf6-c3e9-4b2d-821b-217848191c27-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0" Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.073591 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk5tm\" (UniqueName: \"kubernetes.io/projected/621bccf6-c3e9-4b2d-821b-217848191c27-kube-api-access-kk5tm\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0" Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.175504 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-scripts\") pod 
\"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0" Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.175591 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/621bccf6-c3e9-4b2d-821b-217848191c27-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0" Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.175698 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk5tm\" (UniqueName: \"kubernetes.io/projected/621bccf6-c3e9-4b2d-821b-217848191c27-kube-api-access-kk5tm\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0" Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.175796 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0" Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.175834 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-config-data\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0" Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.175870 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0" Jan 27 18:29:07 crc 
kubenswrapper[4907]: I0127 18:29:07.176069 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/621bccf6-c3e9-4b2d-821b-217848191c27-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0" Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.180096 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-scripts\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0" Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.180114 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0" Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.184077 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0" Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.185156 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621bccf6-c3e9-4b2d-821b-217848191c27-config-data\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0" Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.201063 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk5tm\" (UniqueName: 
\"kubernetes.io/projected/621bccf6-c3e9-4b2d-821b-217848191c27-kube-api-access-kk5tm\") pod \"cinder-scheduler-0\" (UID: \"621bccf6-c3e9-4b2d-821b-217848191c27\") " pod="openstack/cinder-scheduler-0" Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.275485 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.761844 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c24523b-b339-4889-9af6-19c8ec0b1048" path="/var/lib/kubelet/pods/2c24523b-b339-4889-9af6-19c8ec0b1048/volumes" Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.814166 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 18:29:07 crc kubenswrapper[4907]: I0127 18:29:07.868024 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"621bccf6-c3e9-4b2d-821b-217848191c27","Type":"ContainerStarted","Data":"80e1b77affd59551c53daf19ceda8c74435235ef679c23448735290450f5e301"} Jan 27 18:29:08 crc kubenswrapper[4907]: I0127 18:29:08.885324 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"621bccf6-c3e9-4b2d-821b-217848191c27","Type":"ContainerStarted","Data":"6d6b1006ad0099555abb70becddbbd89eb4c1824b203c60887a680fde2c3dada"} Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.136850 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.138362 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.147217 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.147436 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.147601 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-kzl48" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.184505 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.217871 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rpnh\" (UniqueName: \"kubernetes.io/projected/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-kube-api-access-7rpnh\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.218295 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config-secret\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.218465 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.218503 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.320177 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.320259 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.320345 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rpnh\" (UniqueName: \"kubernetes.io/projected/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-kube-api-access-7rpnh\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.320439 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config-secret\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.321997 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.326518 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.328568 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config-secret\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.340213 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rpnh\" (UniqueName: \"kubernetes.io/projected/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-kube-api-access-7rpnh\") pod \"openstackclient\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.389711 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.390775 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.422252 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.455328 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.457146 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.479249 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.525878 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8cea1342-da85-42e5-a54b-98b132f7871f-openstack-config-secret\") pod \"openstackclient\" (UID: \"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.525930 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x9xs\" (UniqueName: \"kubernetes.io/projected/8cea1342-da85-42e5-a54b-98b132f7871f-kube-api-access-8x9xs\") pod \"openstackclient\" (UID: \"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.525988 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8cea1342-da85-42e5-a54b-98b132f7871f-openstack-config\") pod \"openstackclient\" (UID: \"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.526040 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cea1342-da85-42e5-a54b-98b132f7871f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: E0127 18:29:09.588127 4907 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 27 18:29:09 crc kubenswrapper[4907]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_6eadebaf-c7ae-4b1a-9917-b00dd0e25125_0(0abc3e4562c3c159c7d498ff732ec1d4e20b6518ddcf9894a741ac237407202c): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0abc3e4562c3c159c7d498ff732ec1d4e20b6518ddcf9894a741ac237407202c" Netns:"/var/run/netns/f3d4b7f9-0fb5-4f55-b9e3-bed143bb9417" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=0abc3e4562c3c159c7d498ff732ec1d4e20b6518ddcf9894a741ac237407202c;K8S_POD_UID=6eadebaf-c7ae-4b1a-9917-b00dd0e25125" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/6eadebaf-c7ae-4b1a-9917-b00dd0e25125]: expected pod UID "6eadebaf-c7ae-4b1a-9917-b00dd0e25125" but got "8cea1342-da85-42e5-a54b-98b132f7871f" from Kube API Jan 27 18:29:09 crc kubenswrapper[4907]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 18:29:09 crc kubenswrapper[4907]: > Jan 27 18:29:09 crc 
kubenswrapper[4907]: E0127 18:29:09.588194 4907 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 27 18:29:09 crc kubenswrapper[4907]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_6eadebaf-c7ae-4b1a-9917-b00dd0e25125_0(0abc3e4562c3c159c7d498ff732ec1d4e20b6518ddcf9894a741ac237407202c): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0abc3e4562c3c159c7d498ff732ec1d4e20b6518ddcf9894a741ac237407202c" Netns:"/var/run/netns/f3d4b7f9-0fb5-4f55-b9e3-bed143bb9417" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=0abc3e4562c3c159c7d498ff732ec1d4e20b6518ddcf9894a741ac237407202c;K8S_POD_UID=6eadebaf-c7ae-4b1a-9917-b00dd0e25125" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/6eadebaf-c7ae-4b1a-9917-b00dd0e25125]: expected pod UID "6eadebaf-c7ae-4b1a-9917-b00dd0e25125" but got "8cea1342-da85-42e5-a54b-98b132f7871f" from Kube API Jan 27 18:29:09 crc kubenswrapper[4907]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 18:29:09 crc kubenswrapper[4907]: > pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.627314 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cea1342-da85-42e5-a54b-98b132f7871f-combined-ca-bundle\") pod \"openstackclient\" (UID: 
\"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.627440 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8cea1342-da85-42e5-a54b-98b132f7871f-openstack-config-secret\") pod \"openstackclient\" (UID: \"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.627474 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x9xs\" (UniqueName: \"kubernetes.io/projected/8cea1342-da85-42e5-a54b-98b132f7871f-kube-api-access-8x9xs\") pod \"openstackclient\" (UID: \"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.627528 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8cea1342-da85-42e5-a54b-98b132f7871f-openstack-config\") pod \"openstackclient\" (UID: \"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.628204 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8cea1342-da85-42e5-a54b-98b132f7871f-openstack-config\") pod \"openstackclient\" (UID: \"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.631863 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8cea1342-da85-42e5-a54b-98b132f7871f-openstack-config-secret\") pod \"openstackclient\" (UID: \"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.631867 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cea1342-da85-42e5-a54b-98b132f7871f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.646279 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x9xs\" (UniqueName: \"kubernetes.io/projected/8cea1342-da85-42e5-a54b-98b132f7871f-kube-api-access-8x9xs\") pod \"openstackclient\" (UID: \"8cea1342-da85-42e5-a54b-98b132f7871f\") " pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.840787 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.899158 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.903447 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"621bccf6-c3e9-4b2d-821b-217848191c27","Type":"ContainerStarted","Data":"044115f65d6920ca496f342a6b343de4121660bd3773415df6035ea6c5f9cfd3"} Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.959939 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.959914575 podStartE2EDuration="3.959914575s" podCreationTimestamp="2026-01-27 18:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:09.947828829 +0000 UTC m=+1405.077111441" watchObservedRunningTime="2026-01-27 18:29:09.959914575 +0000 UTC m=+1405.089197187" Jan 27 18:29:09 crc kubenswrapper[4907]: I0127 18:29:09.999526 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.005813 4907 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6eadebaf-c7ae-4b1a-9917-b00dd0e25125" podUID="8cea1342-da85-42e5-a54b-98b132f7871f" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.137337 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-combined-ca-bundle\") pod \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.137686 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config\") pod \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.137714 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config-secret\") pod \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.137768 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rpnh\" (UniqueName: \"kubernetes.io/projected/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-kube-api-access-7rpnh\") pod \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\" (UID: \"6eadebaf-c7ae-4b1a-9917-b00dd0e25125\") " Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.138207 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6eadebaf-c7ae-4b1a-9917-b00dd0e25125" (UID: "6eadebaf-c7ae-4b1a-9917-b00dd0e25125"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.138835 4907 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.143350 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6eadebaf-c7ae-4b1a-9917-b00dd0e25125" (UID: "6eadebaf-c7ae-4b1a-9917-b00dd0e25125"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.143990 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-kube-api-access-7rpnh" (OuterVolumeSpecName: "kube-api-access-7rpnh") pod "6eadebaf-c7ae-4b1a-9917-b00dd0e25125" (UID: "6eadebaf-c7ae-4b1a-9917-b00dd0e25125"). InnerVolumeSpecName "kube-api-access-7rpnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.155644 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6eadebaf-c7ae-4b1a-9917-b00dd0e25125" (UID: "6eadebaf-c7ae-4b1a-9917-b00dd0e25125"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.245379 4907 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.245417 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rpnh\" (UniqueName: \"kubernetes.io/projected/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-kube-api-access-7rpnh\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.245447 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eadebaf-c7ae-4b1a-9917-b00dd0e25125-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.384534 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.827032 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.931704 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8cea1342-da85-42e5-a54b-98b132f7871f","Type":"ContainerStarted","Data":"3867807b82d8a9c03c0d7533083bbdbba4c5bfc337fae8b64ab63248bc7ef586"} Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.931855 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 18:29:10 crc kubenswrapper[4907]: I0127 18:29:10.966642 4907 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6eadebaf-c7ae-4b1a-9917-b00dd0e25125" podUID="8cea1342-da85-42e5-a54b-98b132f7871f" Jan 27 18:29:11 crc kubenswrapper[4907]: I0127 18:29:11.761606 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eadebaf-c7ae-4b1a-9917-b00dd0e25125" path="/var/lib/kubelet/pods/6eadebaf-c7ae-4b1a-9917-b00dd0e25125/volumes" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.253067 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-68c4f5ddbb-hppxn"] Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.254858 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.257496 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.257765 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.260411 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-865rb" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.270084 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-68c4f5ddbb-hppxn"] Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.276463 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.343237 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-dwq2p"] Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.345334 4907 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.386972 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-dwq2p"] Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.406805 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-config\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.406890 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.406937 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds4bd\" (UniqueName: \"kubernetes.io/projected/719784a4-cead-4054-ac6b-e7e45118be8c-kube-api-access-ds4bd\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.406971 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-combined-ca-bundle\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.406990 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krdxs\" (UniqueName: \"kubernetes.io/projected/c735324d-bfae-4fc6-bde7-081be56ed371-kube-api-access-krdxs\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.407035 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.407113 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.407140 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data-custom\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.407159 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 
18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.408144 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-svc\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.422774 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-868bfd5587-xkz6n"] Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.428174 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-868bfd5587-xkz6n" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.438638 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.441656 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-65c6f76446-q72qf"] Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.443789 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-65c6f76446-q72qf" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.475159 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.510973 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-combined-ca-bundle\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.511194 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.511304 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data-custom\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.511379 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.511471 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-svc\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.512159 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.512271 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv4gg\" (UniqueName: \"kubernetes.io/projected/e984f28b-ac80-459a-9dd3-8faa56796324-kube-api-access-fv4gg\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.512369 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-config\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.512451 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data-custom\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.512577 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.512654 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4rp9\" (UniqueName: \"kubernetes.io/projected/4c65559f-94dd-4b82-af1f-5d4c22c758c2-kube-api-access-g4rp9\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.512741 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-combined-ca-bundle\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.512841 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.512914 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds4bd\" (UniqueName: \"kubernetes.io/projected/719784a4-cead-4054-ac6b-e7e45118be8c-kube-api-access-ds4bd\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.512980 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data-custom\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.513096 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-combined-ca-bundle\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.513176 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krdxs\" (UniqueName: \"kubernetes.io/projected/c735324d-bfae-4fc6-bde7-081be56ed371-kube-api-access-krdxs\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.513311 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.514949 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.515632 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.520817 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.521223 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.521622 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-svc\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.528251 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data-custom\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.528417 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-config\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" 
(UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.536080 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-868bfd5587-xkz6n"] Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.541632 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-65c6f76446-q72qf"] Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.545028 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-combined-ca-bundle\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.557476 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds4bd\" (UniqueName: \"kubernetes.io/projected/719784a4-cead-4054-ac6b-e7e45118be8c-kube-api-access-ds4bd\") pod \"dnsmasq-dns-7d978555f9-dwq2p\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") " pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.582433 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krdxs\" (UniqueName: \"kubernetes.io/projected/c735324d-bfae-4fc6-bde7-081be56ed371-kube-api-access-krdxs\") pod \"heat-engine-68c4f5ddbb-hppxn\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.588401 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.618755 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-combined-ca-bundle\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.618875 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.618928 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv4gg\" (UniqueName: \"kubernetes.io/projected/e984f28b-ac80-459a-9dd3-8faa56796324-kube-api-access-fv4gg\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.618964 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data-custom\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.619019 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4rp9\" (UniqueName: \"kubernetes.io/projected/4c65559f-94dd-4b82-af1f-5d4c22c758c2-kube-api-access-g4rp9\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " 
pod="openstack/heat-cfnapi-65c6f76446-q72qf" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.619043 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-combined-ca-bundle\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.619078 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.619096 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data-custom\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.626391 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-combined-ca-bundle\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.626877 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data-custom\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf" Jan 27 18:29:12 crc 
kubenswrapper[4907]: I0127 18:29:12.633296 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data-custom\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.634725 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-combined-ca-bundle\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.635600 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.642926 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.644562 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4rp9\" (UniqueName: \"kubernetes.io/projected/4c65559f-94dd-4b82-af1f-5d4c22c758c2-kube-api-access-g4rp9\") pod \"heat-cfnapi-65c6f76446-q72qf\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " pod="openstack/heat-cfnapi-65c6f76446-q72qf" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.671465 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.677148 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv4gg\" (UniqueName: \"kubernetes.io/projected/e984f28b-ac80-459a-9dd3-8faa56796324-kube-api-access-fv4gg\") pod \"heat-api-868bfd5587-xkz6n\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " pod="openstack/heat-api-868bfd5587-xkz6n" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.742901 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-65c6f76446-q72qf" Jan 27 18:29:12 crc kubenswrapper[4907]: I0127 18:29:12.807348 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-868bfd5587-xkz6n" Jan 27 18:29:13 crc kubenswrapper[4907]: I0127 18:29:13.478457 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-65c6f76446-q72qf"] Jan 27 18:29:13 crc kubenswrapper[4907]: I0127 18:29:13.494494 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-dwq2p"] Jan 27 18:29:13 crc kubenswrapper[4907]: I0127 18:29:13.652924 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-868bfd5587-xkz6n"] Jan 27 18:29:13 crc kubenswrapper[4907]: I0127 18:29:13.717328 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-68c4f5ddbb-hppxn"] Jan 27 18:29:13 crc kubenswrapper[4907]: I0127 18:29:13.979052 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" event={"ID":"719784a4-cead-4054-ac6b-e7e45118be8c","Type":"ContainerStarted","Data":"fb2fc41aa6c79868126426826ea77ab0aae08150f293d25b0312a5646e2300eb"} Jan 27 18:29:13 crc kubenswrapper[4907]: I0127 18:29:13.980566 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-868bfd5587-xkz6n" 
event={"ID":"e984f28b-ac80-459a-9dd3-8faa56796324","Type":"ContainerStarted","Data":"78bb94542d4d4bf3a6f813c5ed68ef097d08a2c24ab72fcdea247f0ea0fd3815"} Jan 27 18:29:13 crc kubenswrapper[4907]: I0127 18:29:13.981981 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65c6f76446-q72qf" event={"ID":"4c65559f-94dd-4b82-af1f-5d4c22c758c2","Type":"ContainerStarted","Data":"da74bdc5a99774c3a69253cc4dea9e02dc1b8da7d8cfa07a92eca32b5b2b99df"} Jan 27 18:29:13 crc kubenswrapper[4907]: I0127 18:29:13.986691 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-68c4f5ddbb-hppxn" event={"ID":"c735324d-bfae-4fc6-bde7-081be56ed371","Type":"ContainerStarted","Data":"52f3d532c41726df5137a369b3af84663e53079bc95d251a6728a34657806803"} Jan 27 18:29:15 crc kubenswrapper[4907]: I0127 18:29:15.025250 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-68c4f5ddbb-hppxn" event={"ID":"c735324d-bfae-4fc6-bde7-081be56ed371","Type":"ContainerStarted","Data":"4818361f79b51a22513a2f49cb930c17d0d6847ec06dd399c5ceb8171696e7df"} Jan 27 18:29:15 crc kubenswrapper[4907]: I0127 18:29:15.025797 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:29:15 crc kubenswrapper[4907]: I0127 18:29:15.042231 4907 generic.go:334] "Generic (PLEG): container finished" podID="719784a4-cead-4054-ac6b-e7e45118be8c" containerID="838120c8a589e1eec6c4e9a5c93d0700a4e4ff1ee248a15cba9fab8e23320155" exitCode=0 Jan 27 18:29:15 crc kubenswrapper[4907]: I0127 18:29:15.042271 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" event={"ID":"719784a4-cead-4054-ac6b-e7e45118be8c","Type":"ContainerDied","Data":"838120c8a589e1eec6c4e9a5c93d0700a4e4ff1ee248a15cba9fab8e23320155"} Jan 27 18:29:15 crc kubenswrapper[4907]: I0127 18:29:15.058304 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/heat-engine-68c4f5ddbb-hppxn" podStartSLOduration=3.058281381 podStartE2EDuration="3.058281381s" podCreationTimestamp="2026-01-27 18:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:15.04884 +0000 UTC m=+1410.178122612" watchObservedRunningTime="2026-01-27 18:29:15.058281381 +0000 UTC m=+1410.187563993" Jan 27 18:29:17 crc kubenswrapper[4907]: I0127 18:29:17.067873 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-868bfd5587-xkz6n" event={"ID":"e984f28b-ac80-459a-9dd3-8faa56796324","Type":"ContainerStarted","Data":"317109c7f1957f2e2a7db73ec27a682781312d90f212d54363191315d768b505"} Jan 27 18:29:17 crc kubenswrapper[4907]: I0127 18:29:17.068383 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-868bfd5587-xkz6n" Jan 27 18:29:17 crc kubenswrapper[4907]: I0127 18:29:17.070090 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65c6f76446-q72qf" event={"ID":"4c65559f-94dd-4b82-af1f-5d4c22c758c2","Type":"ContainerStarted","Data":"db8e48300b56c0e9fcad75e17e4e31fdb47e3f29c5cffb60182d027a38b7c233"} Jan 27 18:29:17 crc kubenswrapper[4907]: I0127 18:29:17.070161 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-65c6f76446-q72qf" Jan 27 18:29:17 crc kubenswrapper[4907]: I0127 18:29:17.072683 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" event={"ID":"719784a4-cead-4054-ac6b-e7e45118be8c","Type":"ContainerStarted","Data":"f62185a38c302c1e2c4f55c6ff8d8375c06e90c4a030436e37794aaca439103b"} Jan 27 18:29:17 crc kubenswrapper[4907]: I0127 18:29:17.073506 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:17 crc kubenswrapper[4907]: I0127 18:29:17.126152 4907 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-868bfd5587-xkz6n" podStartSLOduration=2.221826619 podStartE2EDuration="5.126132079s" podCreationTimestamp="2026-01-27 18:29:12 +0000 UTC" firstStartedPulling="2026-01-27 18:29:13.647166756 +0000 UTC m=+1408.776449368" lastFinishedPulling="2026-01-27 18:29:16.551472216 +0000 UTC m=+1411.680754828" observedRunningTime="2026-01-27 18:29:17.091993311 +0000 UTC m=+1412.221275923" watchObservedRunningTime="2026-01-27 18:29:17.126132079 +0000 UTC m=+1412.255414691" Jan 27 18:29:17 crc kubenswrapper[4907]: I0127 18:29:17.127492 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" podStartSLOduration=5.127485758 podStartE2EDuration="5.127485758s" podCreationTimestamp="2026-01-27 18:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:17.1177832 +0000 UTC m=+1412.247065812" watchObservedRunningTime="2026-01-27 18:29:17.127485758 +0000 UTC m=+1412.256768370" Jan 27 18:29:17 crc kubenswrapper[4907]: I0127 18:29:17.162202 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-65c6f76446-q72qf" podStartSLOduration=2.077912046 podStartE2EDuration="5.162178242s" podCreationTimestamp="2026-01-27 18:29:12 +0000 UTC" firstStartedPulling="2026-01-27 18:29:13.472334967 +0000 UTC m=+1408.601617579" lastFinishedPulling="2026-01-27 18:29:16.556601163 +0000 UTC m=+1411.685883775" observedRunningTime="2026-01-27 18:29:17.132688977 +0000 UTC m=+1412.261971599" watchObservedRunningTime="2026-01-27 18:29:17.162178242 +0000 UTC m=+1412.291460854" Jan 27 18:29:17 crc kubenswrapper[4907]: I0127 18:29:17.562664 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.206584 4907 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l7bnp"] Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.209843 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.220312 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l7bnp"] Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.273703 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xjmj\" (UniqueName: \"kubernetes.io/projected/32a7503d-bec1-4b22-a132-abaa924af073-kube-api-access-2xjmj\") pod \"redhat-operators-l7bnp\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.273769 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-utilities\") pod \"redhat-operators-l7bnp\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.274032 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-catalog-content\") pod \"redhat-operators-l7bnp\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.375799 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-utilities\") pod \"redhat-operators-l7bnp\" (UID: 
\"32a7503d-bec1-4b22-a132-abaa924af073\") " pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.375958 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-catalog-content\") pod \"redhat-operators-l7bnp\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.376067 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xjmj\" (UniqueName: \"kubernetes.io/projected/32a7503d-bec1-4b22-a132-abaa924af073-kube-api-access-2xjmj\") pod \"redhat-operators-l7bnp\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.376377 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-utilities\") pod \"redhat-operators-l7bnp\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.376426 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-catalog-content\") pod \"redhat-operators-l7bnp\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.395692 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xjmj\" (UniqueName: \"kubernetes.io/projected/32a7503d-bec1-4b22-a132-abaa924af073-kube-api-access-2xjmj\") pod \"redhat-operators-l7bnp\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " 
pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:29:18 crc kubenswrapper[4907]: I0127 18:29:18.534414 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:29:19 crc kubenswrapper[4907]: I0127 18:29:19.933266 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6d47577fc9-fz5kg"] Jan 27 18:29:19 crc kubenswrapper[4907]: I0127 18:29:19.936138 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:19 crc kubenswrapper[4907]: I0127 18:29:19.940955 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 27 18:29:19 crc kubenswrapper[4907]: I0127 18:29:19.941430 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 27 18:29:19 crc kubenswrapper[4907]: I0127 18:29:19.941630 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 18:29:19 crc kubenswrapper[4907]: I0127 18:29:19.958032 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d47577fc9-fz5kg"] Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.064094 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-config-data\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.064189 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfb5201d-eb44-42cb-a5ab-49520cc1e741-log-httpd\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: 
\"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.064275 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-public-tls-certs\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.064297 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxjvl\" (UniqueName: \"kubernetes.io/projected/bfb5201d-eb44-42cb-a5ab-49520cc1e741-kube-api-access-fxjvl\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.064362 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bfb5201d-eb44-42cb-a5ab-49520cc1e741-etc-swift\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.064378 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-combined-ca-bundle\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.068154 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-internal-tls-certs\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.068515 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfb5201d-eb44-42cb-a5ab-49520cc1e741-run-httpd\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.171175 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfb5201d-eb44-42cb-a5ab-49520cc1e741-run-httpd\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.171236 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-config-data\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.171281 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfb5201d-eb44-42cb-a5ab-49520cc1e741-log-httpd\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.171615 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-public-tls-certs\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.171652 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxjvl\" (UniqueName: \"kubernetes.io/projected/bfb5201d-eb44-42cb-a5ab-49520cc1e741-kube-api-access-fxjvl\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.171690 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bfb5201d-eb44-42cb-a5ab-49520cc1e741-etc-swift\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.171708 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-combined-ca-bundle\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.171747 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-internal-tls-certs\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.173212 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/bfb5201d-eb44-42cb-a5ab-49520cc1e741-log-httpd\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.173725 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfb5201d-eb44-42cb-a5ab-49520cc1e741-run-httpd\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.194625 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxjvl\" (UniqueName: \"kubernetes.io/projected/bfb5201d-eb44-42cb-a5ab-49520cc1e741-kube-api-access-fxjvl\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.194720 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-internal-tls-certs\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.195188 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-config-data\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.196372 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bfb5201d-eb44-42cb-a5ab-49520cc1e741-etc-swift\") pod 
\"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.196728 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-public-tls-certs\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.208345 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfb5201d-eb44-42cb-a5ab-49520cc1e741-combined-ca-bundle\") pod \"swift-proxy-6d47577fc9-fz5kg\" (UID: \"bfb5201d-eb44-42cb-a5ab-49520cc1e741\") " pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.263313 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.284635 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-575dc845-lv7nr"] Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.286510 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.313529 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-684dfbddb9-n6ljt"] Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.315017 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.365088 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-575dc845-lv7nr"] Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.377466 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2bzf\" (UniqueName: \"kubernetes.io/projected/356365d4-834b-4980-96b4-9640bc0e2ed1-kube-api-access-z2bzf\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.377698 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-combined-ca-bundle\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.377731 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data-custom\") pod \"heat-engine-575dc845-lv7nr\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.377894 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data\") pod \"heat-engine-575dc845-lv7nr\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.378435 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r6bv\" (UniqueName: \"kubernetes.io/projected/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-kube-api-access-7r6bv\") pod \"heat-engine-575dc845-lv7nr\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.378473 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data-custom\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.378544 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-combined-ca-bundle\") pod \"heat-engine-575dc845-lv7nr\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.378586 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.415835 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-684dfbddb9-n6ljt"] Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.432013 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6cddcdb4d8-6v6xb"] Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.433927 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.481152 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2bzf\" (UniqueName: \"kubernetes.io/projected/356365d4-834b-4980-96b4-9640bc0e2ed1-kube-api-access-z2bzf\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.487676 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.487717 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5phx\" (UniqueName: \"kubernetes.io/projected/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-kube-api-access-f5phx\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.487966 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-combined-ca-bundle\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.488001 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data-custom\") pod \"heat-engine-575dc845-lv7nr\" (UID: 
\"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.488244 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data\") pod \"heat-engine-575dc845-lv7nr\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.488408 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r6bv\" (UniqueName: \"kubernetes.io/projected/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-kube-api-access-7r6bv\") pod \"heat-engine-575dc845-lv7nr\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.488469 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data-custom\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.488699 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-combined-ca-bundle\") pod \"heat-engine-575dc845-lv7nr\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.488750 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " 
pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.488829 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data-custom\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.488852 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-combined-ca-bundle\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.495277 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6cddcdb4d8-6v6xb"] Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.498334 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-combined-ca-bundle\") pod \"heat-engine-575dc845-lv7nr\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.498613 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data-custom\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.498764 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.499704 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-combined-ca-bundle\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.510617 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data-custom\") pod \"heat-engine-575dc845-lv7nr\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.534454 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data\") pod \"heat-engine-575dc845-lv7nr\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.540236 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2bzf\" (UniqueName: \"kubernetes.io/projected/356365d4-834b-4980-96b4-9640bc0e2ed1-kube-api-access-z2bzf\") pod \"heat-cfnapi-684dfbddb9-n6ljt\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.543370 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r6bv\" (UniqueName: 
\"kubernetes.io/projected/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-kube-api-access-7r6bv\") pod \"heat-engine-575dc845-lv7nr\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.591255 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data-custom\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.591295 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-combined-ca-bundle\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.591357 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.591372 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5phx\" (UniqueName: \"kubernetes.io/projected/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-kube-api-access-f5phx\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.596496 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-combined-ca-bundle\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.599118 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data-custom\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.602254 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.616094 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5phx\" (UniqueName: \"kubernetes.io/projected/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-kube-api-access-f5phx\") pod \"heat-api-6cddcdb4d8-6v6xb\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.658108 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.672671 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:20 crc kubenswrapper[4907]: I0127 18:29:20.779028 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:21 crc kubenswrapper[4907]: I0127 18:29:21.197440 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-74c6c685b5-88m65" Jan 27 18:29:21 crc kubenswrapper[4907]: I0127 18:29:21.286355 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59cf67488d-dzx5l"] Jan 27 18:29:21 crc kubenswrapper[4907]: I0127 18:29:21.286681 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59cf67488d-dzx5l" podUID="90a7953e-f884-40eb-a25f-356aefbc6b83" containerName="neutron-api" containerID="cri-o://2bf9c7f91e2206abf55c0751131ddb9b1941ed8de1738af1ef3034eebeb54df1" gracePeriod=30 Jan 27 18:29:21 crc kubenswrapper[4907]: I0127 18:29:21.287277 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59cf67488d-dzx5l" podUID="90a7953e-f884-40eb-a25f-356aefbc6b83" containerName="neutron-httpd" containerID="cri-o://7cc697526f3fac2242634b918709487c8fe948a7cbed93c93fad4c98568461f3" gracePeriod=30 Jan 27 18:29:22 crc kubenswrapper[4907]: I0127 18:29:22.173373 4907 generic.go:334] "Generic (PLEG): container finished" podID="90a7953e-f884-40eb-a25f-356aefbc6b83" containerID="7cc697526f3fac2242634b918709487c8fe948a7cbed93c93fad4c98568461f3" exitCode=0 Jan 27 18:29:22 crc kubenswrapper[4907]: I0127 18:29:22.173472 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59cf67488d-dzx5l" event={"ID":"90a7953e-f884-40eb-a25f-356aefbc6b83","Type":"ContainerDied","Data":"7cc697526f3fac2242634b918709487c8fe948a7cbed93c93fad4c98568461f3"} Jan 27 18:29:22 crc kubenswrapper[4907]: I0127 18:29:22.364014 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:29:22 crc kubenswrapper[4907]: I0127 18:29:22.364313 4907 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-internal-api-0" podUID="26ebee0c-64db-4384-9e27-95691ee28a17" containerName="glance-log" containerID="cri-o://57c10f8dad61ce7e2df71ecf5231d40aae469c3d301f21aca43a58b66cc591b4" gracePeriod=30 Jan 27 18:29:22 crc kubenswrapper[4907]: I0127 18:29:22.364430 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="26ebee0c-64db-4384-9e27-95691ee28a17" containerName="glance-httpd" containerID="cri-o://407e31536fe1940036e6ea2b9c37aa2d461f48d2b23b83ad52319ca807ee71be" gracePeriod=30 Jan 27 18:29:22 crc kubenswrapper[4907]: I0127 18:29:22.690232 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" Jan 27 18:29:22 crc kubenswrapper[4907]: I0127 18:29:22.761168 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-sksgl"] Jan 27 18:29:22 crc kubenswrapper[4907]: I0127 18:29:22.761465 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" podUID="07c0995e-8815-4b0f-bea0-e278aca1a898" containerName="dnsmasq-dns" containerID="cri-o://09982e56c64ef7fd6a99732b67511cc84ff7488420b5bb4c84ffe5c12f4b277d" gracePeriod=10 Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.196960 4907 generic.go:334] "Generic (PLEG): container finished" podID="26ebee0c-64db-4384-9e27-95691ee28a17" containerID="57c10f8dad61ce7e2df71ecf5231d40aae469c3d301f21aca43a58b66cc591b4" exitCode=143 Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.197022 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26ebee0c-64db-4384-9e27-95691ee28a17","Type":"ContainerDied","Data":"57c10f8dad61ce7e2df71ecf5231d40aae469c3d301f21aca43a58b66cc591b4"} Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.200461 4907 generic.go:334] "Generic (PLEG): container finished" 
podID="07c0995e-8815-4b0f-bea0-e278aca1a898" containerID="09982e56c64ef7fd6a99732b67511cc84ff7488420b5bb4c84ffe5c12f4b277d" exitCode=0 Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.200505 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" event={"ID":"07c0995e-8815-4b0f-bea0-e278aca1a898","Type":"ContainerDied","Data":"09982e56c64ef7fd6a99732b67511cc84ff7488420b5bb4c84ffe5c12f4b277d"} Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.484023 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-868bfd5587-xkz6n"] Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.484285 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-868bfd5587-xkz6n" podUID="e984f28b-ac80-459a-9dd3-8faa56796324" containerName="heat-api" containerID="cri-o://317109c7f1957f2e2a7db73ec27a682781312d90f212d54363191315d768b505" gracePeriod=60 Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.501723 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-65c6f76446-q72qf"] Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.501978 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-65c6f76446-q72qf" podUID="4c65559f-94dd-4b82-af1f-5d4c22c758c2" containerName="heat-cfnapi" containerID="cri-o://db8e48300b56c0e9fcad75e17e4e31fdb47e3f29c5cffb60182d027a38b7c233" gracePeriod=60 Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.514463 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-cfnapi-65c6f76446-q72qf" podUID="4c65559f-94dd-4b82-af1f-5d4c22c758c2" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.217:8000/healthcheck\": EOF" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.514691 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-65c6f76446-q72qf" 
podUID="4c65559f-94dd-4b82-af1f-5d4c22c758c2" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.217:8000/healthcheck\": EOF" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.528764 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-667f9867c-2tvqc"] Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.532371 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.532526 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-65c6f76446-q72qf" podUID="4c65559f-94dd-4b82-af1f-5d4c22c758c2" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.217:8000/healthcheck\": EOF" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.539500 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.550064 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.554890 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-api-868bfd5587-xkz6n" podUID="e984f28b-ac80-459a-9dd3-8faa56796324" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.216:8004/healthcheck\": EOF" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.555202 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-868bfd5587-xkz6n" podUID="e984f28b-ac80-459a-9dd3-8faa56796324" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.216:8004/healthcheck\": EOF" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.559576 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6b8c4994cf-k8h5g"] Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 
18:29:23.561127 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.573041 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.573048 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.579836 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data-custom\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.579951 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-internal-tls-certs\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.580019 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5928l\" (UniqueName: \"kubernetes.io/projected/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-kube-api-access-5928l\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.580055 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-public-tls-certs\") pod 
\"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.580118 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-combined-ca-bundle\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.580162 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.587673 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-667f9867c-2tvqc"] Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.601601 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6b8c4994cf-k8h5g"] Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.603705 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-868bfd5587-xkz6n" podUID="e984f28b-ac80-459a-9dd3-8faa56796324" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.216:8004/healthcheck\": read tcp 10.217.0.2:56772->10.217.0.216:8004: read: connection reset by peer" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682007 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-internal-tls-certs\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " 
pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682061 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-combined-ca-bundle\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682108 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data-custom\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682127 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682147 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data-custom\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682201 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nq4t\" (UniqueName: \"kubernetes.io/projected/97762448-336d-4609-a574-310d1b61aa04-kube-api-access-5nq4t\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " 
pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682244 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-internal-tls-certs\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682265 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682320 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5928l\" (UniqueName: \"kubernetes.io/projected/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-kube-api-access-5928l\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682349 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-combined-ca-bundle\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682367 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-public-tls-certs\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " 
pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.682398 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-public-tls-certs\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.690852 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-internal-tls-certs\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.695698 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-combined-ca-bundle\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.697112 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-public-tls-certs\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.700629 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data-custom\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: 
I0127 18:29:23.700840 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.702230 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5928l\" (UniqueName: \"kubernetes.io/projected/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-kube-api-access-5928l\") pod \"heat-api-667f9867c-2tvqc\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.784274 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data-custom\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.784619 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nq4t\" (UniqueName: \"kubernetes.io/projected/97762448-336d-4609-a574-310d1b61aa04-kube-api-access-5nq4t\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.784674 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.784745 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-combined-ca-bundle\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.784779 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-public-tls-certs\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.784824 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-internal-tls-certs\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.793408 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-public-tls-certs\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.794354 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-internal-tls-certs\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.794609 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data-custom\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.795239 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-combined-ca-bundle\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.795542 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.810387 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nq4t\" (UniqueName: \"kubernetes.io/projected/97762448-336d-4609-a574-310d1b61aa04-kube-api-access-5nq4t\") pod \"heat-cfnapi-6b8c4994cf-k8h5g\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.857738 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:23 crc kubenswrapper[4907]: I0127 18:29:23.882749 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:24 crc kubenswrapper[4907]: I0127 18:29:24.961117 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:24 crc kubenswrapper[4907]: I0127 18:29:24.962785 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="ceilometer-notification-agent" containerID="cri-o://30381b9e5c02b53fab3bb5b2164a15477d49d8271d2ec72a1467c4ad2b4048f9" gracePeriod=30 Jan 27 18:29:24 crc kubenswrapper[4907]: I0127 18:29:24.962830 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="proxy-httpd" containerID="cri-o://b21ee0689f20c14515d918c7dda8214a9c152541cbda9470661c08b982a62fb6" gracePeriod=30 Jan 27 18:29:24 crc kubenswrapper[4907]: I0127 18:29:24.962785 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="sg-core" containerID="cri-o://be1c16ecb8b06599f8c451b8237965953ab21c1c1fa1a7ecf911f52449548e0f" gracePeriod=30 Jan 27 18:29:24 crc kubenswrapper[4907]: I0127 18:29:24.969138 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="ceilometer-central-agent" containerID="cri-o://4c4954bde20aa461f7f624165c8484db027fd5ee67d6b3e834e2b80c68780516" gracePeriod=30 Jan 27 18:29:24 crc kubenswrapper[4907]: I0127 18:29:24.988356 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.209:3000/\": EOF" Jan 27 18:29:25 crc kubenswrapper[4907]: I0127 18:29:25.173963 4907 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.209:3000/\": dial tcp 10.217.0.209:3000: connect: connection refused" Jan 27 18:29:25 crc kubenswrapper[4907]: I0127 18:29:25.254198 4907 generic.go:334] "Generic (PLEG): container finished" podID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerID="b21ee0689f20c14515d918c7dda8214a9c152541cbda9470661c08b982a62fb6" exitCode=0 Jan 27 18:29:25 crc kubenswrapper[4907]: I0127 18:29:25.254472 4907 generic.go:334] "Generic (PLEG): container finished" podID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerID="be1c16ecb8b06599f8c451b8237965953ab21c1c1fa1a7ecf911f52449548e0f" exitCode=2 Jan 27 18:29:25 crc kubenswrapper[4907]: I0127 18:29:25.255694 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd086e93-3ba0-4f66-a848-e139b0eaaef1","Type":"ContainerDied","Data":"b21ee0689f20c14515d918c7dda8214a9c152541cbda9470661c08b982a62fb6"} Jan 27 18:29:25 crc kubenswrapper[4907]: I0127 18:29:25.256071 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd086e93-3ba0-4f66-a848-e139b0eaaef1","Type":"ContainerDied","Data":"be1c16ecb8b06599f8c451b8237965953ab21c1c1fa1a7ecf911f52449548e0f"} Jan 27 18:29:25 crc kubenswrapper[4907]: I0127 18:29:25.542014 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" podUID="07c0995e-8815-4b0f-bea0-e278aca1a898" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.206:5353: connect: connection refused" Jan 27 18:29:26 crc kubenswrapper[4907]: I0127 18:29:26.269204 4907 generic.go:334] "Generic (PLEG): container finished" podID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerID="4c4954bde20aa461f7f624165c8484db027fd5ee67d6b3e834e2b80c68780516" exitCode=0 Jan 27 18:29:26 crc kubenswrapper[4907]: I0127 
18:29:26.269283 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd086e93-3ba0-4f66-a848-e139b0eaaef1","Type":"ContainerDied","Data":"4c4954bde20aa461f7f624165c8484db027fd5ee67d6b3e834e2b80c68780516"} Jan 27 18:29:26 crc kubenswrapper[4907]: I0127 18:29:26.272590 4907 generic.go:334] "Generic (PLEG): container finished" podID="26ebee0c-64db-4384-9e27-95691ee28a17" containerID="407e31536fe1940036e6ea2b9c37aa2d461f48d2b23b83ad52319ca807ee71be" exitCode=0 Jan 27 18:29:26 crc kubenswrapper[4907]: I0127 18:29:26.272721 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26ebee0c-64db-4384-9e27-95691ee28a17","Type":"ContainerDied","Data":"407e31536fe1940036e6ea2b9c37aa2d461f48d2b23b83ad52319ca807ee71be"} Jan 27 18:29:26 crc kubenswrapper[4907]: I0127 18:29:26.677675 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="26ebee0c-64db-4384-9e27-95691ee28a17" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.196:9292/healthcheck\": dial tcp 10.217.0.196:9292: connect: connection refused" Jan 27 18:29:26 crc kubenswrapper[4907]: I0127 18:29:26.677852 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="26ebee0c-64db-4384-9e27-95691ee28a17" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.196:9292/healthcheck\": dial tcp 10.217.0.196:9292: connect: connection refused" Jan 27 18:29:27 crc kubenswrapper[4907]: E0127 18:29:27.888589 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 27 18:29:27 crc kubenswrapper[4907]: E0127 18:29:27.889226 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5fdhb5hd4hf9h66fh5cdhd7h566h684hb6h668h64ch5b8hd5h674h5f6h657h576h5bh57fh699h7dh85h586h57fh9ch67fhcdh686h56h5f9h59bq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8x9xs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(8cea1342-da85-42e5-a54b-98b132f7871f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:29:27 crc kubenswrapper[4907]: E0127 18:29:27.892568 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="8cea1342-da85-42e5-a54b-98b132f7871f" Jan 27 18:29:28 crc kubenswrapper[4907]: I0127 18:29:28.378296 4907 generic.go:334] "Generic (PLEG): container finished" podID="90a7953e-f884-40eb-a25f-356aefbc6b83" containerID="2bf9c7f91e2206abf55c0751131ddb9b1941ed8de1738af1ef3034eebeb54df1" exitCode=0 Jan 27 18:29:28 crc kubenswrapper[4907]: I0127 18:29:28.379511 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59cf67488d-dzx5l" event={"ID":"90a7953e-f884-40eb-a25f-356aefbc6b83","Type":"ContainerDied","Data":"2bf9c7f91e2206abf55c0751131ddb9b1941ed8de1738af1ef3034eebeb54df1"} Jan 27 18:29:28 crc kubenswrapper[4907]: E0127 18:29:28.397917 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="8cea1342-da85-42e5-a54b-98b132f7871f" Jan 27 18:29:28 crc kubenswrapper[4907]: I0127 18:29:28.743898 4907 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:29:28 crc kubenswrapper[4907]: I0127 18:29:28.924680 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-ovndb-tls-certs\") pod \"90a7953e-f884-40eb-a25f-356aefbc6b83\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " Jan 27 18:29:28 crc kubenswrapper[4907]: I0127 18:29:28.924747 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-combined-ca-bundle\") pod \"90a7953e-f884-40eb-a25f-356aefbc6b83\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " Jan 27 18:29:28 crc kubenswrapper[4907]: I0127 18:29:28.924810 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clx4b\" (UniqueName: \"kubernetes.io/projected/90a7953e-f884-40eb-a25f-356aefbc6b83-kube-api-access-clx4b\") pod \"90a7953e-f884-40eb-a25f-356aefbc6b83\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " Jan 27 18:29:28 crc kubenswrapper[4907]: I0127 18:29:28.924974 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-config\") pod \"90a7953e-f884-40eb-a25f-356aefbc6b83\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " Jan 27 18:29:28 crc kubenswrapper[4907]: I0127 18:29:28.925114 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-httpd-config\") pod \"90a7953e-f884-40eb-a25f-356aefbc6b83\" (UID: \"90a7953e-f884-40eb-a25f-356aefbc6b83\") " Jan 27 18:29:28 crc kubenswrapper[4907]: I0127 18:29:28.931065 4907 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/90a7953e-f884-40eb-a25f-356aefbc6b83-kube-api-access-clx4b" (OuterVolumeSpecName: "kube-api-access-clx4b") pod "90a7953e-f884-40eb-a25f-356aefbc6b83" (UID: "90a7953e-f884-40eb-a25f-356aefbc6b83"). InnerVolumeSpecName "kube-api-access-clx4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:28 crc kubenswrapper[4907]: I0127 18:29:28.931792 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "90a7953e-f884-40eb-a25f-356aefbc6b83" (UID: "90a7953e-f884-40eb-a25f-356aefbc6b83"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:28 crc kubenswrapper[4907]: I0127 18:29:28.986944 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-868bfd5587-xkz6n" podUID="e984f28b-ac80-459a-9dd3-8faa56796324" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.216:8004/healthcheck\": read tcp 10.217.0.2:56774->10.217.0.216:8004: read: connection reset by peer" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.008322 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90a7953e-f884-40eb-a25f-356aefbc6b83" (UID: "90a7953e-f884-40eb-a25f-356aefbc6b83"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.028359 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.028409 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clx4b\" (UniqueName: \"kubernetes.io/projected/90a7953e-f884-40eb-a25f-356aefbc6b83-kube-api-access-clx4b\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.028427 4907 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.036644 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "90a7953e-f884-40eb-a25f-356aefbc6b83" (UID: "90a7953e-f884-40eb-a25f-356aefbc6b83"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.065581 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-config" (OuterVolumeSpecName: "config") pod "90a7953e-f884-40eb-a25f-356aefbc6b83" (UID: "90a7953e-f884-40eb-a25f-356aefbc6b83"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.129735 4907 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.129776 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/90a7953e-f884-40eb-a25f-356aefbc6b83-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.228022 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.235958 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.336183 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-internal-tls-certs\") pod \"26ebee0c-64db-4384-9e27-95691ee28a17\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.336370 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-httpd-run\") pod \"26ebee0c-64db-4384-9e27-95691ee28a17\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.336435 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8p2s\" (UniqueName: \"kubernetes.io/projected/26ebee0c-64db-4384-9e27-95691ee28a17-kube-api-access-d8p2s\") pod \"26ebee0c-64db-4384-9e27-95691ee28a17\" (UID: 
\"26ebee0c-64db-4384-9e27-95691ee28a17\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.336473 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-scripts\") pod \"26ebee0c-64db-4384-9e27-95691ee28a17\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.337188 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "26ebee0c-64db-4384-9e27-95691ee28a17" (UID: "26ebee0c-64db-4384-9e27-95691ee28a17"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.337852 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"26ebee0c-64db-4384-9e27-95691ee28a17\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.337889 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-combined-ca-bundle\") pod \"26ebee0c-64db-4384-9e27-95691ee28a17\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.337920 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-config-data\") pod \"26ebee0c-64db-4384-9e27-95691ee28a17\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.337955 4907 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-logs\") pod \"26ebee0c-64db-4384-9e27-95691ee28a17\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.337988 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-svc\") pod \"07c0995e-8815-4b0f-bea0-e278aca1a898\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.338940 4907 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.342067 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-logs" (OuterVolumeSpecName: "logs") pod "26ebee0c-64db-4384-9e27-95691ee28a17" (UID: "26ebee0c-64db-4384-9e27-95691ee28a17"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.349543 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-scripts" (OuterVolumeSpecName: "scripts") pod "26ebee0c-64db-4384-9e27-95691ee28a17" (UID: "26ebee0c-64db-4384-9e27-95691ee28a17"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.349625 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ebee0c-64db-4384-9e27-95691ee28a17-kube-api-access-d8p2s" (OuterVolumeSpecName: "kube-api-access-d8p2s") pod "26ebee0c-64db-4384-9e27-95691ee28a17" (UID: "26ebee0c-64db-4384-9e27-95691ee28a17"). InnerVolumeSpecName "kube-api-access-d8p2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.391293 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26ebee0c-64db-4384-9e27-95691ee28a17" (UID: "26ebee0c-64db-4384-9e27-95691ee28a17"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.393598 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26ebee0c-64db-4384-9e27-95691ee28a17","Type":"ContainerDied","Data":"f34af67741fb75b00695d753383421cb9433dd7d9bdce1c92c63679b38072e13"} Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.393652 4907 scope.go:117] "RemoveContainer" containerID="407e31536fe1940036e6ea2b9c37aa2d461f48d2b23b83ad52319ca807ee71be" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.393848 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.400106 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59cf67488d-dzx5l" event={"ID":"90a7953e-f884-40eb-a25f-356aefbc6b83","Type":"ContainerDied","Data":"6f4c874eb81621562484702f6ff39867afe712a969d135e16ff650053bcfbc4f"} Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.400482 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59cf67488d-dzx5l" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.421386 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8" (OuterVolumeSpecName: "glance") pod "26ebee0c-64db-4384-9e27-95691ee28a17" (UID: "26ebee0c-64db-4384-9e27-95691ee28a17"). InnerVolumeSpecName "pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.421720 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" event={"ID":"07c0995e-8815-4b0f-bea0-e278aca1a898","Type":"ContainerDied","Data":"9302d404bfba0eee2d8da4cca550efe55dc895f1186f50f082b7659191ac1d96"} Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.421805 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-sksgl" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.430763 4907 generic.go:334] "Generic (PLEG): container finished" podID="e984f28b-ac80-459a-9dd3-8faa56796324" containerID="317109c7f1957f2e2a7db73ec27a682781312d90f212d54363191315d768b505" exitCode=0 Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.430807 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-868bfd5587-xkz6n" event={"ID":"e984f28b-ac80-459a-9dd3-8faa56796324","Type":"ContainerDied","Data":"317109c7f1957f2e2a7db73ec27a682781312d90f212d54363191315d768b505"} Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.440977 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "26ebee0c-64db-4384-9e27-95691ee28a17" (UID: "26ebee0c-64db-4384-9e27-95691ee28a17"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.445335 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-internal-tls-certs\") pod \"26ebee0c-64db-4384-9e27-95691ee28a17\" (UID: \"26ebee0c-64db-4384-9e27-95691ee28a17\") " Jan 27 18:29:29 crc kubenswrapper[4907]: W0127 18:29:29.445426 4907 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/26ebee0c-64db-4384-9e27-95691ee28a17/volumes/kubernetes.io~secret/internal-tls-certs Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.445461 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "26ebee0c-64db-4384-9e27-95691ee28a17" (UID: "26ebee0c-64db-4384-9e27-95691ee28a17"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.446082 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-swift-storage-0\") pod \"07c0995e-8815-4b0f-bea0-e278aca1a898\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.446884 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-config\") pod \"07c0995e-8815-4b0f-bea0-e278aca1a898\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.446931 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-nb\") pod \"07c0995e-8815-4b0f-bea0-e278aca1a898\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.447107 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-sb\") pod \"07c0995e-8815-4b0f-bea0-e278aca1a898\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.447173 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtqwm\" (UniqueName: \"kubernetes.io/projected/07c0995e-8815-4b0f-bea0-e278aca1a898-kube-api-access-dtqwm\") pod \"07c0995e-8815-4b0f-bea0-e278aca1a898\" (UID: \"07c0995e-8815-4b0f-bea0-e278aca1a898\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.452016 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8p2s\" (UniqueName: 
\"kubernetes.io/projected/26ebee0c-64db-4384-9e27-95691ee28a17-kube-api-access-d8p2s\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.452056 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.452069 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.452103 4907 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") on node \"crc\" " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.452120 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26ebee0c-64db-4384-9e27-95691ee28a17-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.452132 4907 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.475747 4907 scope.go:117] "RemoveContainer" containerID="57c10f8dad61ce7e2df71ecf5231d40aae469c3d301f21aca43a58b66cc591b4" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.477132 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c0995e-8815-4b0f-bea0-e278aca1a898-kube-api-access-dtqwm" (OuterVolumeSpecName: "kube-api-access-dtqwm") pod "07c0995e-8815-4b0f-bea0-e278aca1a898" (UID: 
"07c0995e-8815-4b0f-bea0-e278aca1a898"). InnerVolumeSpecName "kube-api-access-dtqwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.491509 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-config-data" (OuterVolumeSpecName: "config-data") pod "26ebee0c-64db-4384-9e27-95691ee28a17" (UID: "26ebee0c-64db-4384-9e27-95691ee28a17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.514308 4907 scope.go:117] "RemoveContainer" containerID="7cc697526f3fac2242634b918709487c8fe948a7cbed93c93fad4c98568461f3" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.530035 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-868bfd5587-xkz6n" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.537073 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "07c0995e-8815-4b0f-bea0-e278aca1a898" (UID: "07c0995e-8815-4b0f-bea0-e278aca1a898"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.535587 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59cf67488d-dzx5l"] Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.544797 4907 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.544980 4907 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8") on node "crc" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.554018 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data-custom\") pod \"e984f28b-ac80-459a-9dd3-8faa56796324\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.554173 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv4gg\" (UniqueName: \"kubernetes.io/projected/e984f28b-ac80-459a-9dd3-8faa56796324-kube-api-access-fv4gg\") pod \"e984f28b-ac80-459a-9dd3-8faa56796324\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.554233 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data\") pod \"e984f28b-ac80-459a-9dd3-8faa56796324\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.555884 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-combined-ca-bundle\") pod \"e984f28b-ac80-459a-9dd3-8faa56796324\" (UID: \"e984f28b-ac80-459a-9dd3-8faa56796324\") " Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.557262 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtqwm\" (UniqueName: \"kubernetes.io/projected/07c0995e-8815-4b0f-bea0-e278aca1a898-kube-api-access-dtqwm\") on node \"crc\" 
DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.557294 4907 reconciler_common.go:293] "Volume detached for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.557308 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ebee0c-64db-4384-9e27-95691ee28a17-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.557323 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.561712 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e984f28b-ac80-459a-9dd3-8faa56796324" (UID: "e984f28b-ac80-459a-9dd3-8faa56796324"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.563216 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e984f28b-ac80-459a-9dd3-8faa56796324-kube-api-access-fv4gg" (OuterVolumeSpecName: "kube-api-access-fv4gg") pod "e984f28b-ac80-459a-9dd3-8faa56796324" (UID: "e984f28b-ac80-459a-9dd3-8faa56796324"). InnerVolumeSpecName "kube-api-access-fv4gg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.577668 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-59cf67488d-dzx5l"] Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.584307 4907 scope.go:117] "RemoveContainer" containerID="2bf9c7f91e2206abf55c0751131ddb9b1941ed8de1738af1ef3034eebeb54df1" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.592130 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "07c0995e-8815-4b0f-bea0-e278aca1a898" (UID: "07c0995e-8815-4b0f-bea0-e278aca1a898"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.628833 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-config" (OuterVolumeSpecName: "config") pod "07c0995e-8815-4b0f-bea0-e278aca1a898" (UID: "07c0995e-8815-4b0f-bea0-e278aca1a898"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.631411 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "07c0995e-8815-4b0f-bea0-e278aca1a898" (UID: "07c0995e-8815-4b0f-bea0-e278aca1a898"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.653356 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e984f28b-ac80-459a-9dd3-8faa56796324" (UID: "e984f28b-ac80-459a-9dd3-8faa56796324"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.657273 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "07c0995e-8815-4b0f-bea0-e278aca1a898" (UID: "07c0995e-8815-4b0f-bea0-e278aca1a898"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.663691 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.663729 4907 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.663743 4907 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.663757 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.663768 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.663778 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv4gg\" (UniqueName: \"kubernetes.io/projected/e984f28b-ac80-459a-9dd3-8faa56796324-kube-api-access-fv4gg\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.663793 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07c0995e-8815-4b0f-bea0-e278aca1a898-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.663911 4907 scope.go:117] "RemoveContainer" containerID="09982e56c64ef7fd6a99732b67511cc84ff7488420b5bb4c84ffe5c12f4b277d" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.681279 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-667f9867c-2tvqc"] Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.698886 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-684dfbddb9-n6ljt"] Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.703692 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data" (OuterVolumeSpecName: "config-data") pod "e984f28b-ac80-459a-9dd3-8faa56796324" (UID: "e984f28b-ac80-459a-9dd3-8faa56796324"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.715089 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6b8c4994cf-k8h5g"] Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.718779 4907 scope.go:117] "RemoveContainer" containerID="a183293e03f2475814a7a549a40c6fac89c967734284ac6b7832b0a0bcbbcc1b" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.730508 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l7bnp"] Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.766496 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e984f28b-ac80-459a-9dd3-8faa56796324-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.805193 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90a7953e-f884-40eb-a25f-356aefbc6b83" path="/var/lib/kubelet/pods/90a7953e-f884-40eb-a25f-356aefbc6b83/volumes" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.807490 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6cddcdb4d8-6v6xb"] Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.807602 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-575dc845-lv7nr"] Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.807662 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d47577fc9-fz5kg"] Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.896903 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.940826 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.963244 4907 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/heat-cfnapi-65c6f76446-q72qf" podUID="4c65559f-94dd-4b82-af1f-5d4c22c758c2" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.217:8000/healthcheck\": read tcp 10.217.0.2:40084->10.217.0.217:8000: read: connection reset by peer" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.969634 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:29:29 crc kubenswrapper[4907]: E0127 18:29:29.970181 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ebee0c-64db-4384-9e27-95691ee28a17" containerName="glance-httpd" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970201 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ebee0c-64db-4384-9e27-95691ee28a17" containerName="glance-httpd" Jan 27 18:29:29 crc kubenswrapper[4907]: E0127 18:29:29.970212 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c0995e-8815-4b0f-bea0-e278aca1a898" containerName="dnsmasq-dns" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970219 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c0995e-8815-4b0f-bea0-e278aca1a898" containerName="dnsmasq-dns" Jan 27 18:29:29 crc kubenswrapper[4907]: E0127 18:29:29.970242 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a7953e-f884-40eb-a25f-356aefbc6b83" containerName="neutron-httpd" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970249 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a7953e-f884-40eb-a25f-356aefbc6b83" containerName="neutron-httpd" Jan 27 18:29:29 crc kubenswrapper[4907]: E0127 18:29:29.970260 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c0995e-8815-4b0f-bea0-e278aca1a898" containerName="init" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970266 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c0995e-8815-4b0f-bea0-e278aca1a898" 
containerName="init" Jan 27 18:29:29 crc kubenswrapper[4907]: E0127 18:29:29.970281 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e984f28b-ac80-459a-9dd3-8faa56796324" containerName="heat-api" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970287 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="e984f28b-ac80-459a-9dd3-8faa56796324" containerName="heat-api" Jan 27 18:29:29 crc kubenswrapper[4907]: E0127 18:29:29.970300 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ebee0c-64db-4384-9e27-95691ee28a17" containerName="glance-log" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970305 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ebee0c-64db-4384-9e27-95691ee28a17" containerName="glance-log" Jan 27 18:29:29 crc kubenswrapper[4907]: E0127 18:29:29.970330 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a7953e-f884-40eb-a25f-356aefbc6b83" containerName="neutron-api" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970336 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a7953e-f884-40eb-a25f-356aefbc6b83" containerName="neutron-api" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970523 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c0995e-8815-4b0f-bea0-e278aca1a898" containerName="dnsmasq-dns" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970537 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ebee0c-64db-4384-9e27-95691ee28a17" containerName="glance-httpd" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970544 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="e984f28b-ac80-459a-9dd3-8faa56796324" containerName="heat-api" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970577 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="90a7953e-f884-40eb-a25f-356aefbc6b83" containerName="neutron-httpd" Jan 27 18:29:29 crc 
kubenswrapper[4907]: I0127 18:29:29.970590 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ebee0c-64db-4384-9e27-95691ee28a17" containerName="glance-log" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.970612 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="90a7953e-f884-40eb-a25f-356aefbc6b83" containerName="neutron-api" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.971808 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.977377 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.979572 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 18:29:29 crc kubenswrapper[4907]: I0127 18:29:29.985978 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-sksgl"] Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.016840 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-sksgl"] Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.057424 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.116977 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.117057 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.117122 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.117148 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.117165 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79b7035b-7e7c-40e4-86a8-d1499df47d5f-logs\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.117531 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pmnw\" (UniqueName: \"kubernetes.io/projected/79b7035b-7e7c-40e4-86a8-d1499df47d5f-kube-api-access-5pmnw\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.117753 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/79b7035b-7e7c-40e4-86a8-d1499df47d5f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.117792 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.221238 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.221379 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.221460 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.221509 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.221533 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79b7035b-7e7c-40e4-86a8-d1499df47d5f-logs\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.221585 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pmnw\" (UniqueName: \"kubernetes.io/projected/79b7035b-7e7c-40e4-86a8-d1499df47d5f-kube-api-access-5pmnw\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.221624 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/79b7035b-7e7c-40e4-86a8-d1499df47d5f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.221661 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.222492 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79b7035b-7e7c-40e4-86a8-d1499df47d5f-logs\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.223074 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/79b7035b-7e7c-40e4-86a8-d1499df47d5f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.230213 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.230887 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.230915 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ac46f52a85ef09145563fd0548ce08354897473e6fde7cb6037ea95dd6b9939/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.228754 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.237465 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.237504 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b7035b-7e7c-40e4-86a8-d1499df47d5f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.244399 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pmnw\" (UniqueName: 
\"kubernetes.io/projected/79b7035b-7e7c-40e4-86a8-d1499df47d5f-kube-api-access-5pmnw\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.298602 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04f1bdd6-3598-45de-bdbf-4963fc1ce4e8\") pod \"glance-default-internal-api-0\" (UID: \"79b7035b-7e7c-40e4-86a8-d1499df47d5f\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.474959 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" event={"ID":"356365d4-834b-4980-96b4-9640bc0e2ed1","Type":"ContainerStarted","Data":"7fc94a6ead06a9cce9ab07e0158546ba301eccc8d2ee6e6136d473c9cfe6314a"} Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.508511 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cddcdb4d8-6v6xb" event={"ID":"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8","Type":"ContainerStarted","Data":"331b532045c147969d3177834aceba27cb761565c843a60c7c50b5dded09e0dd"} Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.514646 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" event={"ID":"97762448-336d-4609-a574-310d1b61aa04","Type":"ContainerStarted","Data":"932bc826c2156d8a545c997d012284767107290f98ea3a005ea9a94b6a995a9a"} Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.522123 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-667f9867c-2tvqc" event={"ID":"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15","Type":"ContainerStarted","Data":"982750ecf1d92da3b9717ddf32bec4e3216a8b464d7df0c13f157bfd3020e7bb"} Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.528523 4907 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/swift-proxy-6d47577fc9-fz5kg" event={"ID":"bfb5201d-eb44-42cb-a5ab-49520cc1e741","Type":"ContainerStarted","Data":"1cda97c7c8fb2238d89fddf9156984f90f05b47233ef53653849986540f6e310"} Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.544747 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7bnp" event={"ID":"32a7503d-bec1-4b22-a132-abaa924af073","Type":"ContainerStarted","Data":"02a7ef787ad2af55aee76003ed5f2c734d79246a8b02f7fc6a11cdc00fcff410"} Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.550390 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.551350 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-575dc845-lv7nr" event={"ID":"51ff4a9d-d39e-4357-a248-4b93e5eeaf13","Type":"ContainerStarted","Data":"04b2490b34471bde3da133012b6b62ccc9d41cf3e6a16b1fd242cf158ae8c1e2"} Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.563996 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-868bfd5587-xkz6n" event={"ID":"e984f28b-ac80-459a-9dd3-8faa56796324","Type":"ContainerDied","Data":"78bb94542d4d4bf3a6f813c5ed68ef097d08a2c24ab72fcdea247f0ea0fd3815"} Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.564049 4907 scope.go:117] "RemoveContainer" containerID="317109c7f1957f2e2a7db73ec27a682781312d90f212d54363191315d768b505" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.564088 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-868bfd5587-xkz6n" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.589166 4907 generic.go:334] "Generic (PLEG): container finished" podID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerID="30381b9e5c02b53fab3bb5b2164a15477d49d8271d2ec72a1467c4ad2b4048f9" exitCode=0 Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.589255 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd086e93-3ba0-4f66-a848-e139b0eaaef1","Type":"ContainerDied","Data":"30381b9e5c02b53fab3bb5b2164a15477d49d8271d2ec72a1467c4ad2b4048f9"} Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.630990 4907 generic.go:334] "Generic (PLEG): container finished" podID="4c65559f-94dd-4b82-af1f-5d4c22c758c2" containerID="db8e48300b56c0e9fcad75e17e4e31fdb47e3f29c5cffb60182d027a38b7c233" exitCode=0 Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.631211 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65c6f76446-q72qf" event={"ID":"4c65559f-94dd-4b82-af1f-5d4c22c758c2","Type":"ContainerDied","Data":"db8e48300b56c0e9fcad75e17e4e31fdb47e3f29c5cffb60182d027a38b7c233"} Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.656286 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-868bfd5587-xkz6n"] Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.662587 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-65c6f76446-q72qf" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.672829 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-868bfd5587-xkz6n"] Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.736734 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-combined-ca-bundle\") pod \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.736905 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data-custom\") pod \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.737000 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4rp9\" (UniqueName: \"kubernetes.io/projected/4c65559f-94dd-4b82-af1f-5d4c22c758c2-kube-api-access-g4rp9\") pod \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.737414 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data\") pod \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\" (UID: \"4c65559f-94dd-4b82-af1f-5d4c22c758c2\") " Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.752628 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.755267 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c65559f-94dd-4b82-af1f-5d4c22c758c2-kube-api-access-g4rp9" (OuterVolumeSpecName: "kube-api-access-g4rp9") pod "4c65559f-94dd-4b82-af1f-5d4c22c758c2" (UID: "4c65559f-94dd-4b82-af1f-5d4c22c758c2"). InnerVolumeSpecName "kube-api-access-g4rp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.760075 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4c65559f-94dd-4b82-af1f-5d4c22c758c2" (UID: "4c65559f-94dd-4b82-af1f-5d4c22c758c2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.839079 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-sg-core-conf-yaml\") pod \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.839165 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-config-data\") pod \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.839225 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-run-httpd\") pod \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\" (UID: 
\"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.839254 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-combined-ca-bundle\") pod \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.839318 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-scripts\") pod \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.839513 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjd92\" (UniqueName: \"kubernetes.io/projected/fd086e93-3ba0-4f66-a848-e139b0eaaef1-kube-api-access-kjd92\") pod \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.839796 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-log-httpd\") pod \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\" (UID: \"fd086e93-3ba0-4f66-a848-e139b0eaaef1\") " Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.840930 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4rp9\" (UniqueName: \"kubernetes.io/projected/4c65559f-94dd-4b82-af1f-5d4c22c758c2-kube-api-access-g4rp9\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.840953 4907 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data-custom\") on node \"crc\" 
DevicePath \"\"" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.845397 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fd086e93-3ba0-4f66-a848-e139b0eaaef1" (UID: "fd086e93-3ba0-4f66-a848-e139b0eaaef1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.847183 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fd086e93-3ba0-4f66-a848-e139b0eaaef1" (UID: "fd086e93-3ba0-4f66-a848-e139b0eaaef1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.852767 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd086e93-3ba0-4f66-a848-e139b0eaaef1-kube-api-access-kjd92" (OuterVolumeSpecName: "kube-api-access-kjd92") pod "fd086e93-3ba0-4f66-a848-e139b0eaaef1" (UID: "fd086e93-3ba0-4f66-a848-e139b0eaaef1"). InnerVolumeSpecName "kube-api-access-kjd92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.957140 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjd92\" (UniqueName: \"kubernetes.io/projected/fd086e93-3ba0-4f66-a848-e139b0eaaef1-kube-api-access-kjd92\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.957184 4907 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:30 crc kubenswrapper[4907]: I0127 18:29:30.957192 4907 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd086e93-3ba0-4f66-a848-e139b0eaaef1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.021413 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-scripts" (OuterVolumeSpecName: "scripts") pod "fd086e93-3ba0-4f66-a848-e139b0eaaef1" (UID: "fd086e93-3ba0-4f66-a848-e139b0eaaef1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.061323 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.315463 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.606691 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fd086e93-3ba0-4f66-a848-e139b0eaaef1" (UID: "fd086e93-3ba0-4f66-a848-e139b0eaaef1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.624272 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c65559f-94dd-4b82-af1f-5d4c22c758c2" (UID: "4c65559f-94dd-4b82-af1f-5d4c22c758c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.632768 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data" (OuterVolumeSpecName: "config-data") pod "4c65559f-94dd-4b82-af1f-5d4c22c758c2" (UID: "4c65559f-94dd-4b82-af1f-5d4c22c758c2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.682363 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd086e93-3ba0-4f66-a848-e139b0eaaef1","Type":"ContainerDied","Data":"0dcda993740684fc7c475d953a4c78e14e7f5c8fbe1eb87431b09fbc9bf63899"} Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.682417 4907 scope.go:117] "RemoveContainer" containerID="b21ee0689f20c14515d918c7dda8214a9c152541cbda9470661c08b982a62fb6" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.682569 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.696961 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65c6f76446-q72qf" event={"ID":"4c65559f-94dd-4b82-af1f-5d4c22c758c2","Type":"ContainerDied","Data":"da74bdc5a99774c3a69253cc4dea9e02dc1b8da7d8cfa07a92eca32b5b2b99df"} Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.697054 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-65c6f76446-q72qf" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.700903 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.702366 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c65559f-94dd-4b82-af1f-5d4c22c758c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.702402 4907 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.727525 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd086e93-3ba0-4f66-a848-e139b0eaaef1" (UID: "fd086e93-3ba0-4f66-a848-e139b0eaaef1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.739370 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-667f9867c-2tvqc" event={"ID":"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15","Type":"ContainerStarted","Data":"7fb90059097c3a083f21613ee4d5120a76dc2a28cb01ea77d74033a66e97445e"} Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.741582 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.773852 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-667f9867c-2tvqc" podStartSLOduration=8.77383231 podStartE2EDuration="8.77383231s" podCreationTimestamp="2026-01-27 18:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:31.763672911 +0000 UTC m=+1426.892955533" watchObservedRunningTime="2026-01-27 18:29:31.77383231 +0000 UTC m=+1426.903114922" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.775869 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c0995e-8815-4b0f-bea0-e278aca1a898" path="/var/lib/kubelet/pods/07c0995e-8815-4b0f-bea0-e278aca1a898/volumes" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.777377 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26ebee0c-64db-4384-9e27-95691ee28a17" path="/var/lib/kubelet/pods/26ebee0c-64db-4384-9e27-95691ee28a17/volumes" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.778775 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e984f28b-ac80-459a-9dd3-8faa56796324" path="/var/lib/kubelet/pods/e984f28b-ac80-459a-9dd3-8faa56796324/volumes" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.804283 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.818639 4907 generic.go:334] "Generic (PLEG): container finished" podID="32a7503d-bec1-4b22-a132-abaa924af073" containerID="42e2f3c6eb7900c322183e56b2c9b33010feeb36c90a07a1f9c4172617b6ca0c" exitCode=0 Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.824679 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-config-data" (OuterVolumeSpecName: "config-data") pod "fd086e93-3ba0-4f66-a848-e139b0eaaef1" (UID: "fd086e93-3ba0-4f66-a848-e139b0eaaef1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.847341 4907 scope.go:117] "RemoveContainer" containerID="45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.854048 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" podStartSLOduration=8.854026043 podStartE2EDuration="8.854026043s" podCreationTimestamp="2026-01-27 18:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:31.806970313 +0000 UTC m=+1426.936252935" watchObservedRunningTime="2026-01-27 18:29:31.854026043 +0000 UTC m=+1426.983308655" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.906352 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd086e93-3ba0-4f66-a848-e139b0eaaef1-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:31 crc kubenswrapper[4907]: I0127 18:29:31.915870 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6cddcdb4d8-6v6xb" 
podStartSLOduration=11.915851553 podStartE2EDuration="11.915851553s" podCreationTimestamp="2026-01-27 18:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:31.887921778 +0000 UTC m=+1427.017204390" watchObservedRunningTime="2026-01-27 18:29:31.915851553 +0000 UTC m=+1427.045134165" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.052667 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.052704 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"79b7035b-7e7c-40e4-86a8-d1499df47d5f","Type":"ContainerStarted","Data":"625904e18e2df3ac8a2e8eb7cebe332c9d8345bb145b5aa0ae9e47e979fac9f0"} Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.052728 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" event={"ID":"97762448-336d-4609-a574-310d1b61aa04","Type":"ContainerStarted","Data":"2aaefe127aed6dba10d995e1c7d462041c3be74278927bb883d380dc5671700b"} Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.052740 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7bnp" event={"ID":"32a7503d-bec1-4b22-a132-abaa924af073","Type":"ContainerDied","Data":"42e2f3c6eb7900c322183e56b2c9b33010feeb36c90a07a1f9c4172617b6ca0c"} Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.052760 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" event={"ID":"356365d4-834b-4980-96b4-9640bc0e2ed1","Type":"ContainerStarted","Data":"45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7"} Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.052775 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cddcdb4d8-6v6xb" 
event={"ID":"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8","Type":"ContainerStarted","Data":"9f2141957831da60fe06df1f7176d2e5a6ec6247a8839d9294940af0d0ce5294"} Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.052791 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.052803 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d47577fc9-fz5kg" event={"ID":"bfb5201d-eb44-42cb-a5ab-49520cc1e741","Type":"ContainerStarted","Data":"db7244065a8a454343422aa0144fde80cab3ae31707a00db86ef0860f2dcd4df"} Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.073750 4907 scope.go:117] "RemoveContainer" containerID="be1c16ecb8b06599f8c451b8237965953ab21c1c1fa1a7ecf911f52449548e0f" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.225174 4907 scope.go:117] "RemoveContainer" containerID="30381b9e5c02b53fab3bb5b2164a15477d49d8271d2ec72a1467c4ad2b4048f9" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.232152 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-65c6f76446-q72qf"] Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.251576 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-65c6f76446-q72qf"] Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.272181 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.303673 4907 scope.go:117] "RemoveContainer" containerID="4c4954bde20aa461f7f624165c8484db027fd5ee67d6b3e834e2b80c68780516" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.334150 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.357813 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:32 crc kubenswrapper[4907]: E0127 
18:29:32.358782 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="proxy-httpd" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.358800 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="proxy-httpd" Jan 27 18:29:32 crc kubenswrapper[4907]: E0127 18:29:32.358820 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="ceilometer-central-agent" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.358826 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="ceilometer-central-agent" Jan 27 18:29:32 crc kubenswrapper[4907]: E0127 18:29:32.358861 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="sg-core" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.358866 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="sg-core" Jan 27 18:29:32 crc kubenswrapper[4907]: E0127 18:29:32.358883 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c65559f-94dd-4b82-af1f-5d4c22c758c2" containerName="heat-cfnapi" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.358888 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c65559f-94dd-4b82-af1f-5d4c22c758c2" containerName="heat-cfnapi" Jan 27 18:29:32 crc kubenswrapper[4907]: E0127 18:29:32.358901 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="ceilometer-notification-agent" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.358906 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="ceilometer-notification-agent" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 
18:29:32.359142 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="sg-core" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.359159 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="ceilometer-notification-agent" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.359171 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="ceilometer-central-agent" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.359182 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c65559f-94dd-4b82-af1f-5d4c22c758c2" containerName="heat-cfnapi" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.359190 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" containerName="proxy-httpd" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.363667 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.374627 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.374879 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.394934 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.407885 4907 scope.go:117] "RemoveContainer" containerID="db8e48300b56c0e9fcad75e17e4e31fdb47e3f29c5cffb60182d027a38b7c233" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.433043 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.433117 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-run-httpd\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.433188 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-config-data\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.433214 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-log-httpd\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.433235 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.433350 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj6m9\" (UniqueName: \"kubernetes.io/projected/0c38d81c-140e-4516-b19c-8b58d7b25c43-kube-api-access-dj6m9\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.433399 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-scripts\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.535737 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-scripts\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.535810 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " 
pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.535869 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-run-httpd\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.535927 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-config-data\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.535963 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-log-httpd\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.535987 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.536120 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj6m9\" (UniqueName: \"kubernetes.io/projected/0c38d81c-140e-4516-b19c-8b58d7b25c43-kube-api-access-dj6m9\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.537653 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-run-httpd\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.537978 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-log-httpd\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.548103 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-scripts\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.549287 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-config-data\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.557459 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.557473 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj6m9\" (UniqueName: \"kubernetes.io/projected/0c38d81c-140e-4516-b19c-8b58d7b25c43-kube-api-access-dj6m9\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.559601 4907 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.654613 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.692578 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.922891 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-575dc845-lv7nr" event={"ID":"51ff4a9d-d39e-4357-a248-4b93e5eeaf13","Type":"ContainerStarted","Data":"f9851fe0ece01f814039aa40d824e3502803d48db20224fbe65365a65acca7f5"} Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.923914 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.949290 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"79b7035b-7e7c-40e4-86a8-d1499df47d5f","Type":"ContainerStarted","Data":"1b2e5927a1dda6d6eac67be0afa08a3ea1ef91530f32d480cd6b5f5930c5f5d1"} Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.952100 4907 generic.go:334] "Generic (PLEG): container finished" podID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" containerID="9f2141957831da60fe06df1f7176d2e5a6ec6247a8839d9294940af0d0ce5294" exitCode=1 Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.952306 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cddcdb4d8-6v6xb" event={"ID":"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8","Type":"ContainerDied","Data":"9f2141957831da60fe06df1f7176d2e5a6ec6247a8839d9294940af0d0ce5294"} Jan 27 18:29:32 crc 
kubenswrapper[4907]: I0127 18:29:32.953220 4907 scope.go:117] "RemoveContainer" containerID="9f2141957831da60fe06df1f7176d2e5a6ec6247a8839d9294940af0d0ce5294" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.963413 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d47577fc9-fz5kg" event={"ID":"bfb5201d-eb44-42cb-a5ab-49520cc1e741","Type":"ContainerStarted","Data":"3f7d89cf947f8a39ad3e952a39bb5ca4a60f28a71f4f01eac307c7a42fe3c341"} Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.963658 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.964082 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.985279 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-575dc845-lv7nr" podStartSLOduration=12.98525956 podStartE2EDuration="12.98525956s" podCreationTimestamp="2026-01-27 18:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:32.941818383 +0000 UTC m=+1428.071100995" watchObservedRunningTime="2026-01-27 18:29:32.98525956 +0000 UTC m=+1428.114542172" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.989474 4907 generic.go:334] "Generic (PLEG): container finished" podID="356365d4-834b-4980-96b4-9640bc0e2ed1" containerID="45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7" exitCode=1 Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.989677 4907 generic.go:334] "Generic (PLEG): container finished" podID="356365d4-834b-4980-96b4-9640bc0e2ed1" containerID="41e424ccee76e7ce5b85c17220e5b9d7fd0a6d99b1ffc3e6a62f7f56e586a916" exitCode=1 Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.989757 4907 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" event={"ID":"356365d4-834b-4980-96b4-9640bc0e2ed1","Type":"ContainerDied","Data":"45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7"} Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.989801 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" event={"ID":"356365d4-834b-4980-96b4-9640bc0e2ed1","Type":"ContainerDied","Data":"41e424ccee76e7ce5b85c17220e5b9d7fd0a6d99b1ffc3e6a62f7f56e586a916"} Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.989935 4907 scope.go:117] "RemoveContainer" containerID="45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7" Jan 27 18:29:32 crc kubenswrapper[4907]: I0127 18:29:32.990463 4907 scope.go:117] "RemoveContainer" containerID="41e424ccee76e7ce5b85c17220e5b9d7fd0a6d99b1ffc3e6a62f7f56e586a916" Jan 27 18:29:32 crc kubenswrapper[4907]: E0127 18:29:32.990798 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-684dfbddb9-n6ljt_openstack(356365d4-834b-4980-96b4-9640bc0e2ed1)\"" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" podUID="356365d4-834b-4980-96b4-9640bc0e2ed1" Jan 27 18:29:33 crc kubenswrapper[4907]: I0127 18:29:33.025624 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6d47577fc9-fz5kg" podStartSLOduration=14.025602547 podStartE2EDuration="14.025602547s" podCreationTimestamp="2026-01-27 18:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:33.017172567 +0000 UTC m=+1428.146455179" watchObservedRunningTime="2026-01-27 18:29:33.025602547 +0000 UTC m=+1428.154885159" Jan 27 18:29:33 crc kubenswrapper[4907]: I0127 18:29:33.150873 4907 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:29:33 crc kubenswrapper[4907]: I0127 18:29:33.151494 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="edc45c3a-8ebc-47ae-b823-b3013e4ea0df" containerName="glance-log" containerID="cri-o://1ee0f45464a728d3c8b7c89d78049f812b80218fda30b0b45464599f00786431" gracePeriod=30 Jan 27 18:29:33 crc kubenswrapper[4907]: I0127 18:29:33.151801 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="edc45c3a-8ebc-47ae-b823-b3013e4ea0df" containerName="glance-httpd" containerID="cri-o://584795c084a119985cd393053285260241d0610d4ca09fe854b1805aec5eb536" gracePeriod=30 Jan 27 18:29:33 crc kubenswrapper[4907]: I0127 18:29:33.159698 4907 scope.go:117] "RemoveContainer" containerID="45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7" Jan 27 18:29:33 crc kubenswrapper[4907]: E0127 18:29:33.163465 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7\": container with ID starting with 45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7 not found: ID does not exist" containerID="45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7" Jan 27 18:29:33 crc kubenswrapper[4907]: I0127 18:29:33.163525 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7"} err="failed to get container status \"45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7\": rpc error: code = NotFound desc = could not find container \"45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7\": container with ID starting with 45f56b39742d479db7712c636f13bf787b3ae750742cebef9935c6892da321b7 not found: 
ID does not exist" Jan 27 18:29:33 crc kubenswrapper[4907]: I0127 18:29:33.287486 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:33 crc kubenswrapper[4907]: W0127 18:29:33.289644 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c38d81c_140e_4516_b19c_8b58d7b25c43.slice/crio-ef97afd7808bbbe85ef86b335e241ebae055b88050624ce45bc3bcd3dc34f509 WatchSource:0}: Error finding container ef97afd7808bbbe85ef86b335e241ebae055b88050624ce45bc3bcd3dc34f509: Status 404 returned error can't find the container with id ef97afd7808bbbe85ef86b335e241ebae055b88050624ce45bc3bcd3dc34f509 Jan 27 18:29:33 crc kubenswrapper[4907]: E0127 18:29:33.464222 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedc45c3a_8ebc_47ae_b823_b3013e4ea0df.slice/crio-conmon-1ee0f45464a728d3c8b7c89d78049f812b80218fda30b0b45464599f00786431.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedc45c3a_8ebc_47ae_b823_b3013e4ea0df.slice/crio-1ee0f45464a728d3c8b7c89d78049f812b80218fda30b0b45464599f00786431.scope\": RecentStats: unable to find data in memory cache]" Jan 27 18:29:33 crc kubenswrapper[4907]: I0127 18:29:33.767906 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c65559f-94dd-4b82-af1f-5d4c22c758c2" path="/var/lib/kubelet/pods/4c65559f-94dd-4b82-af1f-5d4c22c758c2/volumes" Jan 27 18:29:33 crc kubenswrapper[4907]: I0127 18:29:33.768620 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd086e93-3ba0-4f66-a848-e139b0eaaef1" path="/var/lib/kubelet/pods/fd086e93-3ba0-4f66-a848-e139b0eaaef1/volumes" Jan 27 18:29:34 crc kubenswrapper[4907]: I0127 18:29:34.011616 4907 generic.go:334] "Generic (PLEG): container finished" 
podID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" containerID="6ee1a373506b2509644f4b4bdbb1c722e8fa7e88572f88df500a39bca4a00f38" exitCode=1 Jan 27 18:29:34 crc kubenswrapper[4907]: I0127 18:29:34.011927 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cddcdb4d8-6v6xb" event={"ID":"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8","Type":"ContainerDied","Data":"6ee1a373506b2509644f4b4bdbb1c722e8fa7e88572f88df500a39bca4a00f38"} Jan 27 18:29:34 crc kubenswrapper[4907]: I0127 18:29:34.011994 4907 scope.go:117] "RemoveContainer" containerID="9f2141957831da60fe06df1f7176d2e5a6ec6247a8839d9294940af0d0ce5294" Jan 27 18:29:34 crc kubenswrapper[4907]: I0127 18:29:34.015627 4907 scope.go:117] "RemoveContainer" containerID="6ee1a373506b2509644f4b4bdbb1c722e8fa7e88572f88df500a39bca4a00f38" Jan 27 18:29:34 crc kubenswrapper[4907]: E0127 18:29:34.016384 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6cddcdb4d8-6v6xb_openstack(d8e647e4-32a6-4b4f-a082-3d1ff013a6d8)\"" pod="openstack/heat-api-6cddcdb4d8-6v6xb" podUID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" Jan 27 18:29:34 crc kubenswrapper[4907]: I0127 18:29:34.028049 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7bnp" event={"ID":"32a7503d-bec1-4b22-a132-abaa924af073","Type":"ContainerStarted","Data":"c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b"} Jan 27 18:29:34 crc kubenswrapper[4907]: I0127 18:29:34.036201 4907 generic.go:334] "Generic (PLEG): container finished" podID="edc45c3a-8ebc-47ae-b823-b3013e4ea0df" containerID="1ee0f45464a728d3c8b7c89d78049f812b80218fda30b0b45464599f00786431" exitCode=143 Jan 27 18:29:34 crc kubenswrapper[4907]: I0127 18:29:34.036279 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"edc45c3a-8ebc-47ae-b823-b3013e4ea0df","Type":"ContainerDied","Data":"1ee0f45464a728d3c8b7c89d78049f812b80218fda30b0b45464599f00786431"} Jan 27 18:29:34 crc kubenswrapper[4907]: I0127 18:29:34.040746 4907 scope.go:117] "RemoveContainer" containerID="41e424ccee76e7ce5b85c17220e5b9d7fd0a6d99b1ffc3e6a62f7f56e586a916" Jan 27 18:29:34 crc kubenswrapper[4907]: E0127 18:29:34.044376 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-684dfbddb9-n6ljt_openstack(356365d4-834b-4980-96b4-9640bc0e2ed1)\"" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" podUID="356365d4-834b-4980-96b4-9640bc0e2ed1" Jan 27 18:29:34 crc kubenswrapper[4907]: I0127 18:29:34.051717 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c38d81c-140e-4516-b19c-8b58d7b25c43","Type":"ContainerStarted","Data":"ef97afd7808bbbe85ef86b335e241ebae055b88050624ce45bc3bcd3dc34f509"} Jan 27 18:29:35 crc kubenswrapper[4907]: I0127 18:29:35.063674 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c38d81c-140e-4516-b19c-8b58d7b25c43","Type":"ContainerStarted","Data":"c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2"} Jan 27 18:29:35 crc kubenswrapper[4907]: I0127 18:29:35.067599 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"79b7035b-7e7c-40e4-86a8-d1499df47d5f","Type":"ContainerStarted","Data":"84e4d822154dac7b667eb71c0da0cc01f2e6ff2ccaf638c661063b7f1a73168b"} Jan 27 18:29:35 crc kubenswrapper[4907]: I0127 18:29:35.070283 4907 scope.go:117] "RemoveContainer" containerID="6ee1a373506b2509644f4b4bdbb1c722e8fa7e88572f88df500a39bca4a00f38" Jan 27 18:29:35 crc kubenswrapper[4907]: E0127 18:29:35.070569 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6cddcdb4d8-6v6xb_openstack(d8e647e4-32a6-4b4f-a082-3d1ff013a6d8)\"" pod="openstack/heat-api-6cddcdb4d8-6v6xb" podUID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" Jan 27 18:29:35 crc kubenswrapper[4907]: I0127 18:29:35.104101 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.104085163 podStartE2EDuration="6.104085163s" podCreationTimestamp="2026-01-27 18:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:35.091986329 +0000 UTC m=+1430.221268951" watchObservedRunningTime="2026-01-27 18:29:35.104085163 +0000 UTC m=+1430.233367775" Jan 27 18:29:35 crc kubenswrapper[4907]: I0127 18:29:35.673431 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:35 crc kubenswrapper[4907]: I0127 18:29:35.673488 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:35 crc kubenswrapper[4907]: I0127 18:29:35.674320 4907 scope.go:117] "RemoveContainer" containerID="41e424ccee76e7ce5b85c17220e5b9d7fd0a6d99b1ffc3e6a62f7f56e586a916" Jan 27 18:29:35 crc kubenswrapper[4907]: E0127 18:29:35.674685 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-684dfbddb9-n6ljt_openstack(356365d4-834b-4980-96b4-9640bc0e2ed1)\"" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" podUID="356365d4-834b-4980-96b4-9640bc0e2ed1" Jan 27 18:29:35 crc kubenswrapper[4907]: I0127 18:29:35.779426 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:35 
crc kubenswrapper[4907]: I0127 18:29:35.779485 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:36 crc kubenswrapper[4907]: I0127 18:29:36.085180 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c38d81c-140e-4516-b19c-8b58d7b25c43","Type":"ContainerStarted","Data":"e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c"} Jan 27 18:29:36 crc kubenswrapper[4907]: I0127 18:29:36.086139 4907 scope.go:117] "RemoveContainer" containerID="6ee1a373506b2509644f4b4bdbb1c722e8fa7e88572f88df500a39bca4a00f38" Jan 27 18:29:36 crc kubenswrapper[4907]: E0127 18:29:36.086423 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6cddcdb4d8-6v6xb_openstack(d8e647e4-32a6-4b4f-a082-3d1ff013a6d8)\"" pod="openstack/heat-api-6cddcdb4d8-6v6xb" podUID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.109542 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c38d81c-140e-4516-b19c-8b58d7b25c43","Type":"ContainerStarted","Data":"742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097"} Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.116169 4907 generic.go:334] "Generic (PLEG): container finished" podID="edc45c3a-8ebc-47ae-b823-b3013e4ea0df" containerID="584795c084a119985cd393053285260241d0610d4ca09fe854b1805aec5eb536" exitCode=0 Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.116214 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edc45c3a-8ebc-47ae-b823-b3013e4ea0df","Type":"ContainerDied","Data":"584795c084a119985cd393053285260241d0610d4ca09fe854b1805aec5eb536"} Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.546816 4907 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.607184 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-httpd-run\") pod \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.607540 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-public-tls-certs\") pod \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.607570 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "edc45c3a-8ebc-47ae-b823-b3013e4ea0df" (UID: "edc45c3a-8ebc-47ae-b823-b3013e4ea0df"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.608271 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.608447 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-combined-ca-bundle\") pod \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.608671 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-scripts\") pod \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.608852 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdj7s\" (UniqueName: \"kubernetes.io/projected/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-kube-api-access-jdj7s\") pod \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.608971 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-config-data\") pod \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.609096 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-logs\") pod \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\" (UID: \"edc45c3a-8ebc-47ae-b823-b3013e4ea0df\") " Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.610468 4907 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.610973 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-logs" (OuterVolumeSpecName: "logs") pod "edc45c3a-8ebc-47ae-b823-b3013e4ea0df" (UID: "edc45c3a-8ebc-47ae-b823-b3013e4ea0df"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.620786 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-scripts" (OuterVolumeSpecName: "scripts") pod "edc45c3a-8ebc-47ae-b823-b3013e4ea0df" (UID: "edc45c3a-8ebc-47ae-b823-b3013e4ea0df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.624122 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-kube-api-access-jdj7s" (OuterVolumeSpecName: "kube-api-access-jdj7s") pod "edc45c3a-8ebc-47ae-b823-b3013e4ea0df" (UID: "edc45c3a-8ebc-47ae-b823-b3013e4ea0df"). InnerVolumeSpecName "kube-api-access-jdj7s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.639440 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1" (OuterVolumeSpecName: "glance") pod "edc45c3a-8ebc-47ae-b823-b3013e4ea0df" (UID: "edc45c3a-8ebc-47ae-b823-b3013e4ea0df"). InnerVolumeSpecName "pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.673096 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edc45c3a-8ebc-47ae-b823-b3013e4ea0df" (UID: "edc45c3a-8ebc-47ae-b823-b3013e4ea0df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.713318 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.713363 4907 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") on node \"crc\" " Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.713376 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.713387 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.713398 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdj7s\" (UniqueName: \"kubernetes.io/projected/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-kube-api-access-jdj7s\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.719947 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "edc45c3a-8ebc-47ae-b823-b3013e4ea0df" (UID: "edc45c3a-8ebc-47ae-b823-b3013e4ea0df"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.733421 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-config-data" (OuterVolumeSpecName: "config-data") pod "edc45c3a-8ebc-47ae-b823-b3013e4ea0df" (UID: "edc45c3a-8ebc-47ae-b823-b3013e4ea0df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.749489 4907 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.749715 4907 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1") on node "crc" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.815752 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.815782 4907 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc45c3a-8ebc-47ae-b823-b3013e4ea0df-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.815791 4907 reconciler_common.go:293] "Volume detached for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:37 crc kubenswrapper[4907]: I0127 18:29:37.942885 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.138381 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edc45c3a-8ebc-47ae-b823-b3013e4ea0df","Type":"ContainerDied","Data":"a75310de648c3e38dbbf692b92cd3d98b2b70ebd876dc13e2fd06c3922f21dde"} Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.138456 4907 scope.go:117] "RemoveContainer" containerID="584795c084a119985cd393053285260241d0610d4ca09fe854b1805aec5eb536" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.138452 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.175263 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.179930 4907 scope.go:117] "RemoveContainer" containerID="1ee0f45464a728d3c8b7c89d78049f812b80218fda30b0b45464599f00786431" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.194141 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.213759 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:29:38 crc kubenswrapper[4907]: E0127 18:29:38.214406 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc45c3a-8ebc-47ae-b823-b3013e4ea0df" containerName="glance-log" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.214421 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc45c3a-8ebc-47ae-b823-b3013e4ea0df" containerName="glance-log" Jan 27 18:29:38 crc kubenswrapper[4907]: E0127 18:29:38.214468 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc45c3a-8ebc-47ae-b823-b3013e4ea0df" containerName="glance-httpd" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.214480 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc45c3a-8ebc-47ae-b823-b3013e4ea0df" containerName="glance-httpd" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.214767 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc45c3a-8ebc-47ae-b823-b3013e4ea0df" containerName="glance-log" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.214789 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc45c3a-8ebc-47ae-b823-b3013e4ea0df" containerName="glance-httpd" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.216328 4907 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.221509 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.224760 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.225167 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.329800 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34586e59-e405-4871-9eb7-6ec0251bc992-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.329864 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.329939 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.329963 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.330013 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34586e59-e405-4871-9eb7-6ec0251bc992-logs\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.330058 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-config-data\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.330085 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lwsd\" (UniqueName: \"kubernetes.io/projected/34586e59-e405-4871-9eb7-6ec0251bc992-kube-api-access-7lwsd\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.330113 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-scripts\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.431616 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34586e59-e405-4871-9eb7-6ec0251bc992-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.432085 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34586e59-e405-4871-9eb7-6ec0251bc992-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.432201 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.432775 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.432813 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.432888 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34586e59-e405-4871-9eb7-6ec0251bc992-logs\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.432944 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-config-data\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.432982 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lwsd\" (UniqueName: \"kubernetes.io/projected/34586e59-e405-4871-9eb7-6ec0251bc992-kube-api-access-7lwsd\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.433034 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-scripts\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.433625 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34586e59-e405-4871-9eb7-6ec0251bc992-logs\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.438136 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.438641 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-config-data\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.438912 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.439985 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34586e59-e405-4871-9eb7-6ec0251bc992-scripts\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.453342 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lwsd\" (UniqueName: \"kubernetes.io/projected/34586e59-e405-4871-9eb7-6ec0251bc992-kube-api-access-7lwsd\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.457861 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.458205 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f246d510422cbc2bc7c65e4cfc4b09adee7977bbf094457002b4446ed6bccfbd/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.704123 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e42099e-761f-44d8-8535-2a3cb8d80db1\") pod \"glance-default-external-api-0\" (UID: \"34586e59-e405-4871-9eb7-6ec0251bc992\") " pod="openstack/glance-default-external-api-0" Jan 27 18:29:38 crc kubenswrapper[4907]: I0127 18:29:38.848447 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:29:39 crc kubenswrapper[4907]: I0127 18:29:39.485617 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:29:39 crc kubenswrapper[4907]: I0127 18:29:39.776069 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edc45c3a-8ebc-47ae-b823-b3013e4ea0df" path="/var/lib/kubelet/pods/edc45c3a-8ebc-47ae-b823-b3013e4ea0df/volumes" Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.169327 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c38d81c-140e-4516-b19c-8b58d7b25c43","Type":"ContainerStarted","Data":"ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257"} Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.170077 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="ceilometer-central-agent" containerID="cri-o://c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2" gracePeriod=30 Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.170323 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.170697 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="proxy-httpd" containerID="cri-o://ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257" gracePeriod=30 Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.170748 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="sg-core" containerID="cri-o://742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097" gracePeriod=30 Jan 27 18:29:40 
crc kubenswrapper[4907]: I0127 18:29:40.170782 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="ceilometer-notification-agent" containerID="cri-o://e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c" gracePeriod=30 Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.189485 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"34586e59-e405-4871-9eb7-6ec0251bc992","Type":"ContainerStarted","Data":"d7b0b81f86060110a879285f5efa521d7667a77599d2b61007026678743cd8c9"} Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.200779 4907 generic.go:334] "Generic (PLEG): container finished" podID="32a7503d-bec1-4b22-a132-abaa924af073" containerID="c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b" exitCode=0 Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.200822 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7bnp" event={"ID":"32a7503d-bec1-4b22-a132-abaa924af073","Type":"ContainerDied","Data":"c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b"} Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.200927 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.941267337 podStartE2EDuration="8.200909381s" podCreationTimestamp="2026-01-27 18:29:32 +0000 UTC" firstStartedPulling="2026-01-27 18:29:33.299969599 +0000 UTC m=+1428.429252211" lastFinishedPulling="2026-01-27 18:29:38.559611643 +0000 UTC m=+1433.688894255" observedRunningTime="2026-01-27 18:29:40.199076679 +0000 UTC m=+1435.328359311" watchObservedRunningTime="2026-01-27 18:29:40.200909381 +0000 UTC m=+1435.330191993" Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.292418 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.293097 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d47577fc9-fz5kg" Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.551977 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.552340 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.609134 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.610378 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.618443 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.711474 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6cddcdb4d8-6v6xb"] Jan 27 18:29:40 crc kubenswrapper[4907]: I0127 18:29:40.969329 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.025501 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-684dfbddb9-n6ljt"] Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.235692 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"34586e59-e405-4871-9eb7-6ec0251bc992","Type":"ContainerStarted","Data":"1727c8156395d08f2124e173e1673ee384795cc87b27ab131267392b0b2e82b0"} Jan 27 18:29:41 crc 
kubenswrapper[4907]: I0127 18:29:41.247709 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7bnp" event={"ID":"32a7503d-bec1-4b22-a132-abaa924af073","Type":"ContainerStarted","Data":"317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db"} Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.254736 4907 generic.go:334] "Generic (PLEG): container finished" podID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerID="ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257" exitCode=0 Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.254766 4907 generic.go:334] "Generic (PLEG): container finished" podID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerID="742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097" exitCode=2 Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.254774 4907 generic.go:334] "Generic (PLEG): container finished" podID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerID="e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c" exitCode=0 Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.256181 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c38d81c-140e-4516-b19c-8b58d7b25c43","Type":"ContainerDied","Data":"ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257"} Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.256210 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c38d81c-140e-4516-b19c-8b58d7b25c43","Type":"ContainerDied","Data":"742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097"} Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.256221 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c38d81c-140e-4516-b19c-8b58d7b25c43","Type":"ContainerDied","Data":"e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c"} Jan 27 18:29:41 crc kubenswrapper[4907]: 
I0127 18:29:41.256234 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.256746 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.280287 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l7bnp" podStartSLOduration=14.31229163 podStartE2EDuration="23.280263401s" podCreationTimestamp="2026-01-27 18:29:18 +0000 UTC" firstStartedPulling="2026-01-27 18:29:31.821484786 +0000 UTC m=+1426.950767398" lastFinishedPulling="2026-01-27 18:29:40.789456557 +0000 UTC m=+1435.918739169" observedRunningTime="2026-01-27 18:29:41.271961094 +0000 UTC m=+1436.401243706" watchObservedRunningTime="2026-01-27 18:29:41.280263401 +0000 UTC m=+1436.409546013" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.448610 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.530783 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-combined-ca-bundle\") pod \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.530911 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5phx\" (UniqueName: \"kubernetes.io/projected/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-kube-api-access-f5phx\") pod \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.531021 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data\") pod \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.531053 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data-custom\") pod \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\" (UID: \"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8\") " Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.538087 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" (UID: "d8e647e4-32a6-4b4f-a082-3d1ff013a6d8"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.538271 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-kube-api-access-f5phx" (OuterVolumeSpecName: "kube-api-access-f5phx") pod "d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" (UID: "d8e647e4-32a6-4b4f-a082-3d1ff013a6d8"). InnerVolumeSpecName "kube-api-access-f5phx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.608021 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" (UID: "d8e647e4-32a6-4b4f-a082-3d1ff013a6d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.611832 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data" (OuterVolumeSpecName: "config-data") pod "d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" (UID: "d8e647e4-32a6-4b4f-a082-3d1ff013a6d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.637934 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.637970 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5phx\" (UniqueName: \"kubernetes.io/projected/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-kube-api-access-f5phx\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.637980 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.637988 4907 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.720134 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.842824 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-combined-ca-bundle\") pod \"356365d4-834b-4980-96b4-9640bc0e2ed1\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.842997 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data-custom\") pod \"356365d4-834b-4980-96b4-9640bc0e2ed1\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.843052 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2bzf\" (UniqueName: \"kubernetes.io/projected/356365d4-834b-4980-96b4-9640bc0e2ed1-kube-api-access-z2bzf\") pod \"356365d4-834b-4980-96b4-9640bc0e2ed1\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.843083 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data\") pod \"356365d4-834b-4980-96b4-9640bc0e2ed1\" (UID: \"356365d4-834b-4980-96b4-9640bc0e2ed1\") " Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.846955 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "356365d4-834b-4980-96b4-9640bc0e2ed1" (UID: "356365d4-834b-4980-96b4-9640bc0e2ed1"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.847543 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/356365d4-834b-4980-96b4-9640bc0e2ed1-kube-api-access-z2bzf" (OuterVolumeSpecName: "kube-api-access-z2bzf") pod "356365d4-834b-4980-96b4-9640bc0e2ed1" (UID: "356365d4-834b-4980-96b4-9640bc0e2ed1"). InnerVolumeSpecName "kube-api-access-z2bzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.878206 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "356365d4-834b-4980-96b4-9640bc0e2ed1" (UID: "356365d4-834b-4980-96b4-9640bc0e2ed1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.915289 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data" (OuterVolumeSpecName: "config-data") pod "356365d4-834b-4980-96b4-9640bc0e2ed1" (UID: "356365d4-834b-4980-96b4-9640bc0e2ed1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.946744 4907 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.946778 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2bzf\" (UniqueName: \"kubernetes.io/projected/356365d4-834b-4980-96b4-9640bc0e2ed1-kube-api-access-z2bzf\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.946789 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:41 crc kubenswrapper[4907]: I0127 18:29:41.946797 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/356365d4-834b-4980-96b4-9640bc0e2ed1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:42 crc kubenswrapper[4907]: I0127 18:29:42.265340 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cddcdb4d8-6v6xb" event={"ID":"d8e647e4-32a6-4b4f-a082-3d1ff013a6d8","Type":"ContainerDied","Data":"331b532045c147969d3177834aceba27cb761565c843a60c7c50b5dded09e0dd"} Jan 27 18:29:42 crc kubenswrapper[4907]: I0127 18:29:42.265355 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6cddcdb4d8-6v6xb" Jan 27 18:29:42 crc kubenswrapper[4907]: I0127 18:29:42.265779 4907 scope.go:117] "RemoveContainer" containerID="6ee1a373506b2509644f4b4bdbb1c722e8fa7e88572f88df500a39bca4a00f38" Jan 27 18:29:42 crc kubenswrapper[4907]: I0127 18:29:42.267630 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"34586e59-e405-4871-9eb7-6ec0251bc992","Type":"ContainerStarted","Data":"c691ffc96d61c669523c5ce2a5a0e9fa09a4b7efc048d6f710d2fcb93c5c6cc5"} Jan 27 18:29:42 crc kubenswrapper[4907]: I0127 18:29:42.273689 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" Jan 27 18:29:42 crc kubenswrapper[4907]: I0127 18:29:42.277714 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-684dfbddb9-n6ljt" event={"ID":"356365d4-834b-4980-96b4-9640bc0e2ed1","Type":"ContainerDied","Data":"7fc94a6ead06a9cce9ab07e0158546ba301eccc8d2ee6e6136d473c9cfe6314a"} Jan 27 18:29:42 crc kubenswrapper[4907]: I0127 18:29:42.308309 4907 scope.go:117] "RemoveContainer" containerID="41e424ccee76e7ce5b85c17220e5b9d7fd0a6d99b1ffc3e6a62f7f56e586a916" Jan 27 18:29:42 crc kubenswrapper[4907]: I0127 18:29:42.340602 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.340581429 podStartE2EDuration="4.340581429s" podCreationTimestamp="2026-01-27 18:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:42.305026487 +0000 UTC m=+1437.434309099" watchObservedRunningTime="2026-01-27 18:29:42.340581429 +0000 UTC m=+1437.469864041" Jan 27 18:29:42 crc kubenswrapper[4907]: I0127 18:29:42.345866 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6cddcdb4d8-6v6xb"] Jan 27 18:29:42 crc 
kubenswrapper[4907]: I0127 18:29:42.358161 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6cddcdb4d8-6v6xb"] Jan 27 18:29:42 crc kubenswrapper[4907]: I0127 18:29:42.369014 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-684dfbddb9-n6ljt"] Jan 27 18:29:42 crc kubenswrapper[4907]: I0127 18:29:42.379446 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-684dfbddb9-n6ljt"] Jan 27 18:29:43 crc kubenswrapper[4907]: I0127 18:29:43.291289 4907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:29:43 crc kubenswrapper[4907]: I0127 18:29:43.291673 4907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:29:43 crc kubenswrapper[4907]: I0127 18:29:43.771906 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="356365d4-834b-4980-96b4-9640bc0e2ed1" path="/var/lib/kubelet/pods/356365d4-834b-4980-96b4-9640bc0e2ed1/volumes" Jan 27 18:29:43 crc kubenswrapper[4907]: I0127 18:29:43.773226 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" path="/var/lib/kubelet/pods/d8e647e4-32a6-4b4f-a082-3d1ff013a6d8/volumes" Jan 27 18:29:44 crc kubenswrapper[4907]: I0127 18:29:44.316336 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8cea1342-da85-42e5-a54b-98b132f7871f","Type":"ContainerStarted","Data":"1c52a84f09ca2f0b331b4f48f557ce63cd9376f4f95ea9c34c966e8d60264536"} Jan 27 18:29:44 crc kubenswrapper[4907]: I0127 18:29:44.339109 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.564807023 podStartE2EDuration="35.339089797s" podCreationTimestamp="2026-01-27 18:29:09 +0000 UTC" firstStartedPulling="2026-01-27 18:29:10.388936326 +0000 UTC m=+1405.518218938" lastFinishedPulling="2026-01-27 18:29:43.1632191 +0000 UTC 
m=+1438.292501712" observedRunningTime="2026-01-27 18:29:44.330821391 +0000 UTC m=+1439.460104003" watchObservedRunningTime="2026-01-27 18:29:44.339089797 +0000 UTC m=+1439.468372409" Jan 27 18:29:44 crc kubenswrapper[4907]: I0127 18:29:44.510848 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:44 crc kubenswrapper[4907]: I0127 18:29:44.510946 4907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:29:44 crc kubenswrapper[4907]: I0127 18:29:44.520356 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 18:29:48 crc kubenswrapper[4907]: E0127 18:29:48.506606 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c38d81c_140e_4516_b19c_8b58d7b25c43.slice/crio-conmon-c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c38d81c_140e_4516_b19c_8b58d7b25c43.slice/crio-c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2.scope\": RecentStats: unable to find data in memory cache]" Jan 27 18:29:48 crc kubenswrapper[4907]: E0127 18:29:48.506792 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c38d81c_140e_4516_b19c_8b58d7b25c43.slice/crio-conmon-c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c38d81c_140e_4516_b19c_8b58d7b25c43.slice/crio-c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2.scope\": RecentStats: unable to find data 
in memory cache]" Jan 27 18:29:48 crc kubenswrapper[4907]: E0127 18:29:48.506922 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c38d81c_140e_4516_b19c_8b58d7b25c43.slice/crio-conmon-c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c38d81c_140e_4516_b19c_8b58d7b25c43.slice/crio-c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2.scope\": RecentStats: unable to find data in memory cache]" Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.535098 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.536430 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.849412 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.849732 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.899690 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.903230 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.938234 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.943619 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-combined-ca-bundle\") pod \"0c38d81c-140e-4516-b19c-8b58d7b25c43\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.943988 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-scripts\") pod \"0c38d81c-140e-4516-b19c-8b58d7b25c43\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.944212 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj6m9\" (UniqueName: \"kubernetes.io/projected/0c38d81c-140e-4516-b19c-8b58d7b25c43-kube-api-access-dj6m9\") pod \"0c38d81c-140e-4516-b19c-8b58d7b25c43\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.944376 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-run-httpd\") pod \"0c38d81c-140e-4516-b19c-8b58d7b25c43\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.944509 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-config-data\") pod \"0c38d81c-140e-4516-b19c-8b58d7b25c43\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " Jan 27 
18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.944621 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-sg-core-conf-yaml\") pod \"0c38d81c-140e-4516-b19c-8b58d7b25c43\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.944724 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-log-httpd\") pod \"0c38d81c-140e-4516-b19c-8b58d7b25c43\" (UID: \"0c38d81c-140e-4516-b19c-8b58d7b25c43\") " Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.953248 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0c38d81c-140e-4516-b19c-8b58d7b25c43" (UID: "0c38d81c-140e-4516-b19c-8b58d7b25c43"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.953423 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0c38d81c-140e-4516-b19c-8b58d7b25c43" (UID: "0c38d81c-140e-4516-b19c-8b58d7b25c43"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.960837 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c38d81c-140e-4516-b19c-8b58d7b25c43-kube-api-access-dj6m9" (OuterVolumeSpecName: "kube-api-access-dj6m9") pod "0c38d81c-140e-4516-b19c-8b58d7b25c43" (UID: "0c38d81c-140e-4516-b19c-8b58d7b25c43"). InnerVolumeSpecName "kube-api-access-dj6m9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:48 crc kubenswrapper[4907]: I0127 18:29:48.970038 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-scripts" (OuterVolumeSpecName: "scripts") pod "0c38d81c-140e-4516-b19c-8b58d7b25c43" (UID: "0c38d81c-140e-4516-b19c-8b58d7b25c43"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.022751 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0c38d81c-140e-4516-b19c-8b58d7b25c43" (UID: "0c38d81c-140e-4516-b19c-8b58d7b25c43"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.049163 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.049200 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj6m9\" (UniqueName: \"kubernetes.io/projected/0c38d81c-140e-4516-b19c-8b58d7b25c43-kube-api-access-dj6m9\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.049212 4907 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.049222 4907 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:49 crc 
kubenswrapper[4907]: I0127 18:29:49.049230 4907 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c38d81c-140e-4516-b19c-8b58d7b25c43-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.093693 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c38d81c-140e-4516-b19c-8b58d7b25c43" (UID: "0c38d81c-140e-4516-b19c-8b58d7b25c43"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.151162 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.181897 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-config-data" (OuterVolumeSpecName: "config-data") pod "0c38d81c-140e-4516-b19c-8b58d7b25c43" (UID: "0c38d81c-140e-4516-b19c-8b58d7b25c43"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.252735 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c38d81c-140e-4516-b19c-8b58d7b25c43-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.367906 4907 generic.go:334] "Generic (PLEG): container finished" podID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerID="c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2" exitCode=0 Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.369626 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.372009 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c38d81c-140e-4516-b19c-8b58d7b25c43","Type":"ContainerDied","Data":"c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2"} Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.372061 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c38d81c-140e-4516-b19c-8b58d7b25c43","Type":"ContainerDied","Data":"ef97afd7808bbbe85ef86b335e241ebae055b88050624ce45bc3bcd3dc34f509"} Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.372086 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.372111 4907 scope.go:117] "RemoveContainer" containerID="ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.372460 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.403753 4907 scope.go:117] "RemoveContainer" 
containerID="742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.429550 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.446288 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.454473 4907 scope.go:117] "RemoveContainer" containerID="e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.472659 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.473274 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="356365d4-834b-4980-96b4-9640bc0e2ed1" containerName="heat-cfnapi" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473291 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="356365d4-834b-4980-96b4-9640bc0e2ed1" containerName="heat-cfnapi" Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.473307 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="ceilometer-notification-agent" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473314 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="ceilometer-notification-agent" Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.473335 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" containerName="heat-api" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473344 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" containerName="heat-api" Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.473365 4907 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="356365d4-834b-4980-96b4-9640bc0e2ed1" containerName="heat-cfnapi" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473372 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="356365d4-834b-4980-96b4-9640bc0e2ed1" containerName="heat-cfnapi" Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.473408 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="sg-core" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473416 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="sg-core" Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.473435 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="ceilometer-central-agent" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473442 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="ceilometer-central-agent" Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.473462 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="proxy-httpd" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473470 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="proxy-httpd" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473775 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="ceilometer-central-agent" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473796 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="356365d4-834b-4980-96b4-9640bc0e2ed1" containerName="heat-cfnapi" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473810 4907 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" containerName="heat-api" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473830 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="sg-core" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473846 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="proxy-httpd" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473859 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="356365d4-834b-4980-96b4-9640bc0e2ed1" containerName="heat-cfnapi" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.473871 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" containerName="ceilometer-notification-agent" Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.474131 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" containerName="heat-api" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.474144 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" containerName="heat-api" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.474388 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8e647e4-32a6-4b4f-a082-3d1ff013a6d8" containerName="heat-api" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.476378 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.484408 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.490049 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.497266 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.518571 4907 scope.go:117] "RemoveContainer" containerID="c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.539706 4907 scope.go:117] "RemoveContainer" containerID="ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257" Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.541309 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257\": container with ID starting with ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257 not found: ID does not exist" containerID="ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.541341 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257"} err="failed to get container status \"ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257\": rpc error: code = NotFound desc = could not find container \"ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257\": container with ID starting with ad9897a78dfe6aae8f9e3e2d1cb492b4b02a96f402033c4fd2966ba8f5f8f257 not found: ID does not exist" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 
18:29:49.541361 4907 scope.go:117] "RemoveContainer" containerID="742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097" Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.544367 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097\": container with ID starting with 742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097 not found: ID does not exist" containerID="742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.544397 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097"} err="failed to get container status \"742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097\": rpc error: code = NotFound desc = could not find container \"742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097\": container with ID starting with 742daa4b9b80a52f80b0c9aa648f58cf97e82c03a2eec149aa2062c18c0f8097 not found: ID does not exist" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.544415 4907 scope.go:117] "RemoveContainer" containerID="e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c" Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.547179 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c\": container with ID starting with e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c not found: ID does not exist" containerID="e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.547215 4907 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c"} err="failed to get container status \"e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c\": rpc error: code = NotFound desc = could not find container \"e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c\": container with ID starting with e44073c2af66686a0a675f4d2e91aa31e8bfb78aa070f085843ea770dec7c92c not found: ID does not exist" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.547235 4907 scope.go:117] "RemoveContainer" containerID="c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2" Jan 27 18:29:49 crc kubenswrapper[4907]: E0127 18:29:49.549927 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2\": container with ID starting with c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2 not found: ID does not exist" containerID="c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.549955 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2"} err="failed to get container status \"c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2\": rpc error: code = NotFound desc = could not find container \"c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2\": container with ID starting with c445c9c8d190c7c72ea597a6eeeb5369732077cb3bd2ec799cdb958fecd04bc2 not found: ID does not exist" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.600381 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l7bnp" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="registry-server" probeResult="failure" output=< Jan 27 18:29:49 crc 
kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 18:29:49 crc kubenswrapper[4907]: > Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.661799 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-log-httpd\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.662069 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-scripts\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.662290 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-config-data\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.662523 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-run-httpd\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.662691 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: 
I0127 18:29:49.662746 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqqhq\" (UniqueName: \"kubernetes.io/projected/9edfaf34-4000-4bda-9c1f-0f4afa06325b-kube-api-access-mqqhq\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.662776 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.760675 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c38d81c-140e-4516-b19c-8b58d7b25c43" path="/var/lib/kubelet/pods/0c38d81c-140e-4516-b19c-8b58d7b25c43/volumes" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.764818 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-run-httpd\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.764906 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.764939 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqqhq\" (UniqueName: \"kubernetes.io/projected/9edfaf34-4000-4bda-9c1f-0f4afa06325b-kube-api-access-mqqhq\") pod \"ceilometer-0\" (UID: 
\"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.764960 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.765009 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-log-httpd\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.765096 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-scripts\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.765147 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-config-data\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.768378 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-run-httpd\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.768446 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-log-httpd\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.776398 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.776812 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.777063 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-config-data\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.779186 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-scripts\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.800466 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqqhq\" (UniqueName: \"kubernetes.io/projected/9edfaf34-4000-4bda-9c1f-0f4afa06325b-kube-api-access-mqqhq\") pod \"ceilometer-0\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " pod="openstack/ceilometer-0" Jan 27 18:29:49 crc kubenswrapper[4907]: I0127 18:29:49.829880 4907 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:29:50 crc kubenswrapper[4907]: W0127 18:29:50.377775 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9edfaf34_4000_4bda_9c1f_0f4afa06325b.slice/crio-014b0a2c54edb8d7037c19521867860fb3792d1b8d5ae88075eb18294d048079 WatchSource:0}: Error finding container 014b0a2c54edb8d7037c19521867860fb3792d1b8d5ae88075eb18294d048079: Status 404 returned error can't find the container with id 014b0a2c54edb8d7037c19521867860fb3792d1b8d5ae88075eb18294d048079 Jan 27 18:29:50 crc kubenswrapper[4907]: I0127 18:29:50.420886 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:50 crc kubenswrapper[4907]: I0127 18:29:50.697022 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:29:50 crc kubenswrapper[4907]: I0127 18:29:50.757137 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-68c4f5ddbb-hppxn"] Jan 27 18:29:50 crc kubenswrapper[4907]: I0127 18:29:50.757392 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-68c4f5ddbb-hppxn" podUID="c735324d-bfae-4fc6-bde7-081be56ed371" containerName="heat-engine" containerID="cri-o://4818361f79b51a22513a2f49cb930c17d0d6847ec06dd399c5ceb8171696e7df" gracePeriod=60 Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.245890 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-lb6rn"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.248182 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-lb6rn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.272031 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-lb6rn"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.323488 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9hrg\" (UniqueName: \"kubernetes.io/projected/743ace74-8ac2-43c7-807c-47379f8c50f4-kube-api-access-t9hrg\") pod \"nova-api-db-create-lb6rn\" (UID: \"743ace74-8ac2-43c7-807c-47379f8c50f4\") " pod="openstack/nova-api-db-create-lb6rn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.323655 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/743ace74-8ac2-43c7-807c-47379f8c50f4-operator-scripts\") pod \"nova-api-db-create-lb6rn\" (UID: \"743ace74-8ac2-43c7-807c-47379f8c50f4\") " pod="openstack/nova-api-db-create-lb6rn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.340891 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-nlfm6"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.342575 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nlfm6" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.357435 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-nlfm6"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.385116 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-b6e2-account-create-update-fr784"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.387256 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b6e2-account-create-update-fr784" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.389716 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.391545 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b6e2-account-create-update-fr784"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.425869 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/743ace74-8ac2-43c7-807c-47379f8c50f4-operator-scripts\") pod \"nova-api-db-create-lb6rn\" (UID: \"743ace74-8ac2-43c7-807c-47379f8c50f4\") " pod="openstack/nova-api-db-create-lb6rn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.425930 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlxbv\" (UniqueName: \"kubernetes.io/projected/22bda35b-bb7e-40c5-a263-56fdb4a28784-kube-api-access-tlxbv\") pod \"nova-api-b6e2-account-create-update-fr784\" (UID: \"22bda35b-bb7e-40c5-a263-56fdb4a28784\") " pod="openstack/nova-api-b6e2-account-create-update-fr784" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.426047 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grbnz\" (UniqueName: \"kubernetes.io/projected/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-kube-api-access-grbnz\") pod \"nova-cell0-db-create-nlfm6\" (UID: \"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374\") " pod="openstack/nova-cell0-db-create-nlfm6" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.426091 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22bda35b-bb7e-40c5-a263-56fdb4a28784-operator-scripts\") pod 
\"nova-api-b6e2-account-create-update-fr784\" (UID: \"22bda35b-bb7e-40c5-a263-56fdb4a28784\") " pod="openstack/nova-api-b6e2-account-create-update-fr784" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.426131 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9hrg\" (UniqueName: \"kubernetes.io/projected/743ace74-8ac2-43c7-807c-47379f8c50f4-kube-api-access-t9hrg\") pod \"nova-api-db-create-lb6rn\" (UID: \"743ace74-8ac2-43c7-807c-47379f8c50f4\") " pod="openstack/nova-api-db-create-lb6rn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.426146 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-operator-scripts\") pod \"nova-cell0-db-create-nlfm6\" (UID: \"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374\") " pod="openstack/nova-cell0-db-create-nlfm6" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.427095 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/743ace74-8ac2-43c7-807c-47379f8c50f4-operator-scripts\") pod \"nova-api-db-create-lb6rn\" (UID: \"743ace74-8ac2-43c7-807c-47379f8c50f4\") " pod="openstack/nova-api-db-create-lb6rn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.443090 4907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.443122 4907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.445287 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9edfaf34-4000-4bda-9c1f-0f4afa06325b","Type":"ContainerStarted","Data":"72c74aeeb4de5e1f3042ca5765a544364f882ce81d07c3193c2e4b04c8e2dbd3"} Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.445379 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9edfaf34-4000-4bda-9c1f-0f4afa06325b","Type":"ContainerStarted","Data":"014b0a2c54edb8d7037c19521867860fb3792d1b8d5ae88075eb18294d048079"} Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.461974 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-r6sfn"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.463576 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-r6sfn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.469966 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9hrg\" (UniqueName: \"kubernetes.io/projected/743ace74-8ac2-43c7-807c-47379f8c50f4-kube-api-access-t9hrg\") pod \"nova-api-db-create-lb6rn\" (UID: \"743ace74-8ac2-43c7-807c-47379f8c50f4\") " pod="openstack/nova-api-db-create-lb6rn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.482122 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-r6sfn"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.528377 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlxbv\" (UniqueName: \"kubernetes.io/projected/22bda35b-bb7e-40c5-a263-56fdb4a28784-kube-api-access-tlxbv\") pod \"nova-api-b6e2-account-create-update-fr784\" (UID: \"22bda35b-bb7e-40c5-a263-56fdb4a28784\") " pod="openstack/nova-api-b6e2-account-create-update-fr784" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.528588 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fd63a47-2bbf-455b-8732-8d489507a2a0-operator-scripts\") pod \"nova-cell1-db-create-r6sfn\" (UID: \"9fd63a47-2bbf-455b-8732-8d489507a2a0\") " pod="openstack/nova-cell1-db-create-r6sfn" Jan 27 18:29:51 crc kubenswrapper[4907]: 
I0127 18:29:51.528629 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grbnz\" (UniqueName: \"kubernetes.io/projected/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-kube-api-access-grbnz\") pod \"nova-cell0-db-create-nlfm6\" (UID: \"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374\") " pod="openstack/nova-cell0-db-create-nlfm6" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.528688 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22bda35b-bb7e-40c5-a263-56fdb4a28784-operator-scripts\") pod \"nova-api-b6e2-account-create-update-fr784\" (UID: \"22bda35b-bb7e-40c5-a263-56fdb4a28784\") " pod="openstack/nova-api-b6e2-account-create-update-fr784" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.528754 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-operator-scripts\") pod \"nova-cell0-db-create-nlfm6\" (UID: \"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374\") " pod="openstack/nova-cell0-db-create-nlfm6" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.528872 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s45fv\" (UniqueName: \"kubernetes.io/projected/9fd63a47-2bbf-455b-8732-8d489507a2a0-kube-api-access-s45fv\") pod \"nova-cell1-db-create-r6sfn\" (UID: \"9fd63a47-2bbf-455b-8732-8d489507a2a0\") " pod="openstack/nova-cell1-db-create-r6sfn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.534660 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-operator-scripts\") pod \"nova-cell0-db-create-nlfm6\" (UID: \"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374\") " pod="openstack/nova-cell0-db-create-nlfm6" Jan 27 18:29:51 crc 
kubenswrapper[4907]: I0127 18:29:51.534896 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22bda35b-bb7e-40c5-a263-56fdb4a28784-operator-scripts\") pod \"nova-api-b6e2-account-create-update-fr784\" (UID: \"22bda35b-bb7e-40c5-a263-56fdb4a28784\") " pod="openstack/nova-api-b6e2-account-create-update-fr784" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.555710 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-3c7d-account-create-update-f8kts"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.557784 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.560682 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.560706 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlxbv\" (UniqueName: \"kubernetes.io/projected/22bda35b-bb7e-40c5-a263-56fdb4a28784-kube-api-access-tlxbv\") pod \"nova-api-b6e2-account-create-update-fr784\" (UID: \"22bda35b-bb7e-40c5-a263-56fdb4a28784\") " pod="openstack/nova-api-b6e2-account-create-update-fr784" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.572999 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grbnz\" (UniqueName: \"kubernetes.io/projected/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-kube-api-access-grbnz\") pod \"nova-cell0-db-create-nlfm6\" (UID: \"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374\") " pod="openstack/nova-cell0-db-create-nlfm6" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.578214 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3c7d-account-create-update-f8kts"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.585010 4907 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lb6rn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.630730 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s45fv\" (UniqueName: \"kubernetes.io/projected/9fd63a47-2bbf-455b-8732-8d489507a2a0-kube-api-access-s45fv\") pod \"nova-cell1-db-create-r6sfn\" (UID: \"9fd63a47-2bbf-455b-8732-8d489507a2a0\") " pod="openstack/nova-cell1-db-create-r6sfn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.630854 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db79947d-82c1-4b66-8f0d-d34b96ff9a16-operator-scripts\") pod \"nova-cell0-3c7d-account-create-update-f8kts\" (UID: \"db79947d-82c1-4b66-8f0d-d34b96ff9a16\") " pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.630936 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-628s8\" (UniqueName: \"kubernetes.io/projected/db79947d-82c1-4b66-8f0d-d34b96ff9a16-kube-api-access-628s8\") pod \"nova-cell0-3c7d-account-create-update-f8kts\" (UID: \"db79947d-82c1-4b66-8f0d-d34b96ff9a16\") " pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.631019 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fd63a47-2bbf-455b-8732-8d489507a2a0-operator-scripts\") pod \"nova-cell1-db-create-r6sfn\" (UID: \"9fd63a47-2bbf-455b-8732-8d489507a2a0\") " pod="openstack/nova-cell1-db-create-r6sfn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.632098 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9fd63a47-2bbf-455b-8732-8d489507a2a0-operator-scripts\") pod \"nova-cell1-db-create-r6sfn\" (UID: \"9fd63a47-2bbf-455b-8732-8d489507a2a0\") " pod="openstack/nova-cell1-db-create-r6sfn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.650315 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s45fv\" (UniqueName: \"kubernetes.io/projected/9fd63a47-2bbf-455b-8732-8d489507a2a0-kube-api-access-s45fv\") pod \"nova-cell1-db-create-r6sfn\" (UID: \"9fd63a47-2bbf-455b-8732-8d489507a2a0\") " pod="openstack/nova-cell1-db-create-r6sfn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.662999 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nlfm6" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.713594 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b6e2-account-create-update-fr784" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.735065 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db79947d-82c1-4b66-8f0d-d34b96ff9a16-operator-scripts\") pod \"nova-cell0-3c7d-account-create-update-f8kts\" (UID: \"db79947d-82c1-4b66-8f0d-d34b96ff9a16\") " pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.735133 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-628s8\" (UniqueName: \"kubernetes.io/projected/db79947d-82c1-4b66-8f0d-d34b96ff9a16-kube-api-access-628s8\") pod \"nova-cell0-3c7d-account-create-update-f8kts\" (UID: \"db79947d-82c1-4b66-8f0d-d34b96ff9a16\") " pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.736325 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/db79947d-82c1-4b66-8f0d-d34b96ff9a16-operator-scripts\") pod \"nova-cell0-3c7d-account-create-update-f8kts\" (UID: \"db79947d-82c1-4b66-8f0d-d34b96ff9a16\") " pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.770276 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-628s8\" (UniqueName: \"kubernetes.io/projected/db79947d-82c1-4b66-8f0d-d34b96ff9a16-kube-api-access-628s8\") pod \"nova-cell0-3c7d-account-create-update-f8kts\" (UID: \"db79947d-82c1-4b66-8f0d-d34b96ff9a16\") " pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.801802 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-4610-account-create-update-8lfzv"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.803061 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4610-account-create-update-8lfzv"] Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.803167 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4610-account-create-update-8lfzv" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.812941 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.849437 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-r6sfn" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.851305 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9lrl\" (UniqueName: \"kubernetes.io/projected/1567baee-fe0b-481f-9aca-c424237d77fd-kube-api-access-q9lrl\") pod \"nova-cell1-4610-account-create-update-8lfzv\" (UID: \"1567baee-fe0b-481f-9aca-c424237d77fd\") " pod="openstack/nova-cell1-4610-account-create-update-8lfzv" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.851528 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1567baee-fe0b-481f-9aca-c424237d77fd-operator-scripts\") pod \"nova-cell1-4610-account-create-update-8lfzv\" (UID: \"1567baee-fe0b-481f-9aca-c424237d77fd\") " pod="openstack/nova-cell1-4610-account-create-update-8lfzv" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.954464 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9lrl\" (UniqueName: \"kubernetes.io/projected/1567baee-fe0b-481f-9aca-c424237d77fd-kube-api-access-q9lrl\") pod \"nova-cell1-4610-account-create-update-8lfzv\" (UID: \"1567baee-fe0b-481f-9aca-c424237d77fd\") " pod="openstack/nova-cell1-4610-account-create-update-8lfzv" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.954679 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1567baee-fe0b-481f-9aca-c424237d77fd-operator-scripts\") pod \"nova-cell1-4610-account-create-update-8lfzv\" (UID: \"1567baee-fe0b-481f-9aca-c424237d77fd\") " pod="openstack/nova-cell1-4610-account-create-update-8lfzv" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.955668 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1567baee-fe0b-481f-9aca-c424237d77fd-operator-scripts\") pod \"nova-cell1-4610-account-create-update-8lfzv\" (UID: \"1567baee-fe0b-481f-9aca-c424237d77fd\") " pod="openstack/nova-cell1-4610-account-create-update-8lfzv" Jan 27 18:29:51 crc kubenswrapper[4907]: I0127 18:29:51.975109 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:51.981258 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9lrl\" (UniqueName: \"kubernetes.io/projected/1567baee-fe0b-481f-9aca-c424237d77fd-kube-api-access-q9lrl\") pod \"nova-cell1-4610-account-create-update-8lfzv\" (UID: \"1567baee-fe0b-481f-9aca-c424237d77fd\") " pod="openstack/nova-cell1-4610-account-create-update-8lfzv" Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:52.164647 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4610-account-create-update-8lfzv" Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:52.347514 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-lb6rn"] Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:52.501822 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9edfaf34-4000-4bda-9c1f-0f4afa06325b","Type":"ContainerStarted","Data":"f1afc28349370dcb3dde6c79a26ee69ec1c1fb55a0e0f4f75240430123f8db92"} Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:52.512329 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lb6rn" event={"ID":"743ace74-8ac2-43c7-807c-47379f8c50f4","Type":"ContainerStarted","Data":"24a33a16aa309ea78d696862a5317333d8359ba8cc86e9a6e109d49fa5619506"} Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:52.559547 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-external-api-0" Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:52.559714 4907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:52.575884 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-nlfm6"] Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:52.602327 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b6e2-account-create-update-fr784"] Jan 27 18:29:52 crc kubenswrapper[4907]: E0127 18:29:52.627593 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4818361f79b51a22513a2f49cb930c17d0d6847ec06dd399c5ceb8171696e7df" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 18:29:52 crc kubenswrapper[4907]: E0127 18:29:52.631443 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4818361f79b51a22513a2f49cb930c17d0d6847ec06dd399c5ceb8171696e7df" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 18:29:52 crc kubenswrapper[4907]: E0127 18:29:52.633705 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4818361f79b51a22513a2f49cb930c17d0d6847ec06dd399c5ceb8171696e7df" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 18:29:52 crc kubenswrapper[4907]: E0127 18:29:52.633819 4907 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-68c4f5ddbb-hppxn" 
podUID="c735324d-bfae-4fc6-bde7-081be56ed371" containerName="heat-engine" Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:52.639670 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:52.901099 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:29:52 crc kubenswrapper[4907]: I0127 18:29:52.949691 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4610-account-create-update-8lfzv"] Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.084851 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3c7d-account-create-update-f8kts"] Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.325112 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-r6sfn"] Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.566187 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4610-account-create-update-8lfzv" event={"ID":"1567baee-fe0b-481f-9aca-c424237d77fd","Type":"ContainerStarted","Data":"34a12c9dc8f38270982510114a24e7ea3a049e13c05d33bd9ae10ee514d5899f"} Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.566655 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4610-account-create-update-8lfzv" event={"ID":"1567baee-fe0b-481f-9aca-c424237d77fd","Type":"ContainerStarted","Data":"5541dbd94d308edde81f390f872001f17cba0e9e73c21a93408e660e769fc36a"} Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.592376 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lb6rn" event={"ID":"743ace74-8ac2-43c7-807c-47379f8c50f4","Type":"ContainerStarted","Data":"d03e471c14044aaf78991f516c4ab946c86f770f14b401efd31a543a61a45271"} Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.599293 4907 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell0-db-create-nlfm6" event={"ID":"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374","Type":"ContainerStarted","Data":"9ed99e608fb935435599432d30ba239373e7950b5f2343e25af6cc133d593e4b"} Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.599342 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nlfm6" event={"ID":"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374","Type":"ContainerStarted","Data":"aab5d99cdcda2ad46d8171aae24cc9784bbff20209395c96f5c87dba8e67dac7"} Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.604843 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-r6sfn" event={"ID":"9fd63a47-2bbf-455b-8732-8d489507a2a0","Type":"ContainerStarted","Data":"be1e6ee56a701e7afc91a56a35c63d6d376cdacec6044e3996ca37c93df0370c"} Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.606730 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" event={"ID":"db79947d-82c1-4b66-8f0d-d34b96ff9a16","Type":"ContainerStarted","Data":"f849abd8948b412736e8e19289e2086ed2d13f3ca63f03d7d2ab7d0155250361"} Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.608177 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-4610-account-create-update-8lfzv" podStartSLOduration=2.608154991 podStartE2EDuration="2.608154991s" podCreationTimestamp="2026-01-27 18:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:53.5926905 +0000 UTC m=+1448.721973112" watchObservedRunningTime="2026-01-27 18:29:53.608154991 +0000 UTC m=+1448.737437603" Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.629622 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b6e2-account-create-update-fr784" 
event={"ID":"22bda35b-bb7e-40c5-a263-56fdb4a28784","Type":"ContainerStarted","Data":"0aa13b29a06fede5edeefa1bbecf4c945c7dad2111ce30f628e25763e56679c4"} Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.629672 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b6e2-account-create-update-fr784" event={"ID":"22bda35b-bb7e-40c5-a263-56fdb4a28784","Type":"ContainerStarted","Data":"03a3eedcd78cad026e71366f39357faad9378a19c7692cf37d1bdcb55446cae0"} Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.678843 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-nlfm6" podStartSLOduration=2.678821113 podStartE2EDuration="2.678821113s" podCreationTimestamp="2026-01-27 18:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:53.641060568 +0000 UTC m=+1448.770343180" watchObservedRunningTime="2026-01-27 18:29:53.678821113 +0000 UTC m=+1448.808103725" Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.700803 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" podStartSLOduration=2.7007869380000002 podStartE2EDuration="2.700786938s" podCreationTimestamp="2026-01-27 18:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:53.659707018 +0000 UTC m=+1448.788989640" watchObservedRunningTime="2026-01-27 18:29:53.700786938 +0000 UTC m=+1448.830069550" Jan 27 18:29:53 crc kubenswrapper[4907]: I0127 18:29:53.735167 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-b6e2-account-create-update-fr784" podStartSLOduration=2.735144476 podStartE2EDuration="2.735144476s" podCreationTimestamp="2026-01-27 18:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:53.679025348 +0000 UTC m=+1448.808307960" watchObservedRunningTime="2026-01-27 18:29:53.735144476 +0000 UTC m=+1448.864427088" Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.639134 4907 generic.go:334] "Generic (PLEG): container finished" podID="1567baee-fe0b-481f-9aca-c424237d77fd" containerID="34a12c9dc8f38270982510114a24e7ea3a049e13c05d33bd9ae10ee514d5899f" exitCode=0 Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.639196 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4610-account-create-update-8lfzv" event={"ID":"1567baee-fe0b-481f-9aca-c424237d77fd","Type":"ContainerDied","Data":"34a12c9dc8f38270982510114a24e7ea3a049e13c05d33bd9ae10ee514d5899f"} Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.641414 4907 generic.go:334] "Generic (PLEG): container finished" podID="743ace74-8ac2-43c7-807c-47379f8c50f4" containerID="d03e471c14044aaf78991f516c4ab946c86f770f14b401efd31a543a61a45271" exitCode=0 Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.641490 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lb6rn" event={"ID":"743ace74-8ac2-43c7-807c-47379f8c50f4","Type":"ContainerDied","Data":"d03e471c14044aaf78991f516c4ab946c86f770f14b401efd31a543a61a45271"} Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.643701 4907 generic.go:334] "Generic (PLEG): container finished" podID="94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374" containerID="9ed99e608fb935435599432d30ba239373e7950b5f2343e25af6cc133d593e4b" exitCode=0 Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.643749 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nlfm6" event={"ID":"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374","Type":"ContainerDied","Data":"9ed99e608fb935435599432d30ba239373e7950b5f2343e25af6cc133d593e4b"} Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 
18:29:54.654037 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9edfaf34-4000-4bda-9c1f-0f4afa06325b","Type":"ContainerStarted","Data":"6b5db1511c211da8819e569899e8589693bd0bdc02842f679cc92b27198c0258"} Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.660170 4907 generic.go:334] "Generic (PLEG): container finished" podID="9fd63a47-2bbf-455b-8732-8d489507a2a0" containerID="2bccdaf75b95d0168a686ecc348808d6673dada9c3494bcaf8bc20faf0ab6f1c" exitCode=0 Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.660257 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-r6sfn" event={"ID":"9fd63a47-2bbf-455b-8732-8d489507a2a0","Type":"ContainerDied","Data":"2bccdaf75b95d0168a686ecc348808d6673dada9c3494bcaf8bc20faf0ab6f1c"} Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.664227 4907 generic.go:334] "Generic (PLEG): container finished" podID="db79947d-82c1-4b66-8f0d-d34b96ff9a16" containerID="5dcd6a3a423875bc06c0dd0f5d51a2b87f68629b62084c20315ec4e27b26da69" exitCode=0 Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.664323 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" event={"ID":"db79947d-82c1-4b66-8f0d-d34b96ff9a16","Type":"ContainerDied","Data":"5dcd6a3a423875bc06c0dd0f5d51a2b87f68629b62084c20315ec4e27b26da69"} Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.666637 4907 generic.go:334] "Generic (PLEG): container finished" podID="22bda35b-bb7e-40c5-a263-56fdb4a28784" containerID="0aa13b29a06fede5edeefa1bbecf4c945c7dad2111ce30f628e25763e56679c4" exitCode=0 Jan 27 18:29:54 crc kubenswrapper[4907]: I0127 18:29:54.666701 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b6e2-account-create-update-fr784" event={"ID":"22bda35b-bb7e-40c5-a263-56fdb4a28784","Type":"ContainerDied","Data":"0aa13b29a06fede5edeefa1bbecf4c945c7dad2111ce30f628e25763e56679c4"} Jan 27 18:29:55 
crc kubenswrapper[4907]: I0127 18:29:55.228611 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lb6rn" Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.294077 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/743ace74-8ac2-43c7-807c-47379f8c50f4-operator-scripts\") pod \"743ace74-8ac2-43c7-807c-47379f8c50f4\" (UID: \"743ace74-8ac2-43c7-807c-47379f8c50f4\") " Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.294376 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9hrg\" (UniqueName: \"kubernetes.io/projected/743ace74-8ac2-43c7-807c-47379f8c50f4-kube-api-access-t9hrg\") pod \"743ace74-8ac2-43c7-807c-47379f8c50f4\" (UID: \"743ace74-8ac2-43c7-807c-47379f8c50f4\") " Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.299797 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/743ace74-8ac2-43c7-807c-47379f8c50f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "743ace74-8ac2-43c7-807c-47379f8c50f4" (UID: "743ace74-8ac2-43c7-807c-47379f8c50f4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.300788 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/743ace74-8ac2-43c7-807c-47379f8c50f4-kube-api-access-t9hrg" (OuterVolumeSpecName: "kube-api-access-t9hrg") pod "743ace74-8ac2-43c7-807c-47379f8c50f4" (UID: "743ace74-8ac2-43c7-807c-47379f8c50f4"). InnerVolumeSpecName "kube-api-access-t9hrg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.401055 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/743ace74-8ac2-43c7-807c-47379f8c50f4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.401089 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9hrg\" (UniqueName: \"kubernetes.io/projected/743ace74-8ac2-43c7-807c-47379f8c50f4-kube-api-access-t9hrg\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.677982 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lb6rn" event={"ID":"743ace74-8ac2-43c7-807c-47379f8c50f4","Type":"ContainerDied","Data":"24a33a16aa309ea78d696862a5317333d8359ba8cc86e9a6e109d49fa5619506"} Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.678021 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-lb6rn" Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.678037 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24a33a16aa309ea78d696862a5317333d8359ba8cc86e9a6e109d49fa5619506" Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.680578 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9edfaf34-4000-4bda-9c1f-0f4afa06325b","Type":"ContainerStarted","Data":"f6666bbbc5694f2bb840d66dd6dd5334ea08b62866d58575c225940d82650561"} Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.681056 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="ceilometer-central-agent" containerID="cri-o://72c74aeeb4de5e1f3042ca5765a544364f882ce81d07c3193c2e4b04c8e2dbd3" gracePeriod=30 Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.681091 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="proxy-httpd" containerID="cri-o://f6666bbbc5694f2bb840d66dd6dd5334ea08b62866d58575c225940d82650561" gracePeriod=30 Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.681160 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="ceilometer-notification-agent" containerID="cri-o://f1afc28349370dcb3dde6c79a26ee69ec1c1fb55a0e0f4f75240430123f8db92" gracePeriod=30 Jan 27 18:29:55 crc kubenswrapper[4907]: I0127 18:29:55.681140 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="sg-core" containerID="cri-o://6b5db1511c211da8819e569899e8589693bd0bdc02842f679cc92b27198c0258" gracePeriod=30 Jan 27 18:29:55 
crc kubenswrapper[4907]: I0127 18:29:55.721424 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.871655501 podStartE2EDuration="6.721404265s" podCreationTimestamp="2026-01-27 18:29:49 +0000 UTC" firstStartedPulling="2026-01-27 18:29:50.380751005 +0000 UTC m=+1445.510033617" lastFinishedPulling="2026-01-27 18:29:55.230499769 +0000 UTC m=+1450.359782381" observedRunningTime="2026-01-27 18:29:55.70753387 +0000 UTC m=+1450.836816482" watchObservedRunningTime="2026-01-27 18:29:55.721404265 +0000 UTC m=+1450.850686867" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.044338 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4610-account-create-update-8lfzv" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.121395 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1567baee-fe0b-481f-9aca-c424237d77fd-operator-scripts\") pod \"1567baee-fe0b-481f-9aca-c424237d77fd\" (UID: \"1567baee-fe0b-481f-9aca-c424237d77fd\") " Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.121647 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9lrl\" (UniqueName: \"kubernetes.io/projected/1567baee-fe0b-481f-9aca-c424237d77fd-kube-api-access-q9lrl\") pod \"1567baee-fe0b-481f-9aca-c424237d77fd\" (UID: \"1567baee-fe0b-481f-9aca-c424237d77fd\") " Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.122538 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1567baee-fe0b-481f-9aca-c424237d77fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1567baee-fe0b-481f-9aca-c424237d77fd" (UID: "1567baee-fe0b-481f-9aca-c424237d77fd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.131647 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1567baee-fe0b-481f-9aca-c424237d77fd-kube-api-access-q9lrl" (OuterVolumeSpecName: "kube-api-access-q9lrl") pod "1567baee-fe0b-481f-9aca-c424237d77fd" (UID: "1567baee-fe0b-481f-9aca-c424237d77fd"). InnerVolumeSpecName "kube-api-access-q9lrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.225008 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9lrl\" (UniqueName: \"kubernetes.io/projected/1567baee-fe0b-481f-9aca-c424237d77fd-kube-api-access-q9lrl\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.225084 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1567baee-fe0b-481f-9aca-c424237d77fd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.443930 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-nlfm6" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.531183 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-operator-scripts\") pod \"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374\" (UID: \"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374\") " Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.531689 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grbnz\" (UniqueName: \"kubernetes.io/projected/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-kube-api-access-grbnz\") pod \"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374\" (UID: \"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374\") " Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.534984 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374" (UID: "94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.574694 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-kube-api-access-grbnz" (OuterVolumeSpecName: "kube-api-access-grbnz") pod "94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374" (UID: "94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374"). InnerVolumeSpecName "kube-api-access-grbnz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.653880 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.653919 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grbnz\" (UniqueName: \"kubernetes.io/projected/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374-kube-api-access-grbnz\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.730602 4907 generic.go:334] "Generic (PLEG): container finished" podID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerID="6b5db1511c211da8819e569899e8589693bd0bdc02842f679cc92b27198c0258" exitCode=2 Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.730663 4907 generic.go:334] "Generic (PLEG): container finished" podID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerID="f1afc28349370dcb3dde6c79a26ee69ec1c1fb55a0e0f4f75240430123f8db92" exitCode=0 Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.730745 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9edfaf34-4000-4bda-9c1f-0f4afa06325b","Type":"ContainerDied","Data":"6b5db1511c211da8819e569899e8589693bd0bdc02842f679cc92b27198c0258"} Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.730780 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9edfaf34-4000-4bda-9c1f-0f4afa06325b","Type":"ContainerDied","Data":"f1afc28349370dcb3dde6c79a26ee69ec1c1fb55a0e0f4f75240430123f8db92"} Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.737710 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-4610-account-create-update-8lfzv" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.737817 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4610-account-create-update-8lfzv" event={"ID":"1567baee-fe0b-481f-9aca-c424237d77fd","Type":"ContainerDied","Data":"5541dbd94d308edde81f390f872001f17cba0e9e73c21a93408e660e769fc36a"} Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.737851 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5541dbd94d308edde81f390f872001f17cba0e9e73c21a93408e660e769fc36a" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.740946 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nlfm6" event={"ID":"94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374","Type":"ContainerDied","Data":"aab5d99cdcda2ad46d8171aae24cc9784bbff20209395c96f5c87dba8e67dac7"} Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.740980 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aab5d99cdcda2ad46d8171aae24cc9784bbff20209395c96f5c87dba8e67dac7" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.741030 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nlfm6" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.758320 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b6e2-account-create-update-fr784" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.767190 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.787808 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-r6sfn" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.861943 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s45fv\" (UniqueName: \"kubernetes.io/projected/9fd63a47-2bbf-455b-8732-8d489507a2a0-kube-api-access-s45fv\") pod \"9fd63a47-2bbf-455b-8732-8d489507a2a0\" (UID: \"9fd63a47-2bbf-455b-8732-8d489507a2a0\") " Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.862113 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db79947d-82c1-4b66-8f0d-d34b96ff9a16-operator-scripts\") pod \"db79947d-82c1-4b66-8f0d-d34b96ff9a16\" (UID: \"db79947d-82c1-4b66-8f0d-d34b96ff9a16\") " Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.862190 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22bda35b-bb7e-40c5-a263-56fdb4a28784-operator-scripts\") pod \"22bda35b-bb7e-40c5-a263-56fdb4a28784\" (UID: \"22bda35b-bb7e-40c5-a263-56fdb4a28784\") " Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.862314 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-628s8\" (UniqueName: \"kubernetes.io/projected/db79947d-82c1-4b66-8f0d-d34b96ff9a16-kube-api-access-628s8\") pod \"db79947d-82c1-4b66-8f0d-d34b96ff9a16\" (UID: \"db79947d-82c1-4b66-8f0d-d34b96ff9a16\") " Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.862392 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fd63a47-2bbf-455b-8732-8d489507a2a0-operator-scripts\") pod \"9fd63a47-2bbf-455b-8732-8d489507a2a0\" (UID: \"9fd63a47-2bbf-455b-8732-8d489507a2a0\") " Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.862522 4907 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-tlxbv\" (UniqueName: \"kubernetes.io/projected/22bda35b-bb7e-40c5-a263-56fdb4a28784-kube-api-access-tlxbv\") pod \"22bda35b-bb7e-40c5-a263-56fdb4a28784\" (UID: \"22bda35b-bb7e-40c5-a263-56fdb4a28784\") " Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.865913 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db79947d-82c1-4b66-8f0d-d34b96ff9a16-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db79947d-82c1-4b66-8f0d-d34b96ff9a16" (UID: "db79947d-82c1-4b66-8f0d-d34b96ff9a16"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.866401 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fd63a47-2bbf-455b-8732-8d489507a2a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9fd63a47-2bbf-455b-8732-8d489507a2a0" (UID: "9fd63a47-2bbf-455b-8732-8d489507a2a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.866774 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22bda35b-bb7e-40c5-a263-56fdb4a28784-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22bda35b-bb7e-40c5-a263-56fdb4a28784" (UID: "22bda35b-bb7e-40c5-a263-56fdb4a28784"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.869469 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fd63a47-2bbf-455b-8732-8d489507a2a0-kube-api-access-s45fv" (OuterVolumeSpecName: "kube-api-access-s45fv") pod "9fd63a47-2bbf-455b-8732-8d489507a2a0" (UID: "9fd63a47-2bbf-455b-8732-8d489507a2a0"). 
InnerVolumeSpecName "kube-api-access-s45fv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.870869 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db79947d-82c1-4b66-8f0d-d34b96ff9a16-kube-api-access-628s8" (OuterVolumeSpecName: "kube-api-access-628s8") pod "db79947d-82c1-4b66-8f0d-d34b96ff9a16" (UID: "db79947d-82c1-4b66-8f0d-d34b96ff9a16"). InnerVolumeSpecName "kube-api-access-628s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.874233 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22bda35b-bb7e-40c5-a263-56fdb4a28784-kube-api-access-tlxbv" (OuterVolumeSpecName: "kube-api-access-tlxbv") pod "22bda35b-bb7e-40c5-a263-56fdb4a28784" (UID: "22bda35b-bb7e-40c5-a263-56fdb4a28784"). InnerVolumeSpecName "kube-api-access-tlxbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.966627 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlxbv\" (UniqueName: \"kubernetes.io/projected/22bda35b-bb7e-40c5-a263-56fdb4a28784-kube-api-access-tlxbv\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.966659 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s45fv\" (UniqueName: \"kubernetes.io/projected/9fd63a47-2bbf-455b-8732-8d489507a2a0-kube-api-access-s45fv\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.966679 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db79947d-82c1-4b66-8f0d-d34b96ff9a16-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.966688 4907 reconciler_common.go:293] "Volume detached for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22bda35b-bb7e-40c5-a263-56fdb4a28784-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.966698 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-628s8\" (UniqueName: \"kubernetes.io/projected/db79947d-82c1-4b66-8f0d-d34b96ff9a16-kube-api-access-628s8\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:56 crc kubenswrapper[4907]: I0127 18:29:56.966707 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fd63a47-2bbf-455b-8732-8d489507a2a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:57 crc kubenswrapper[4907]: I0127 18:29:57.762066 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-r6sfn" Jan 27 18:29:57 crc kubenswrapper[4907]: I0127 18:29:57.764358 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" Jan 27 18:29:57 crc kubenswrapper[4907]: I0127 18:29:57.766592 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b6e2-account-create-update-fr784" Jan 27 18:29:57 crc kubenswrapper[4907]: I0127 18:29:57.824624 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-r6sfn" event={"ID":"9fd63a47-2bbf-455b-8732-8d489507a2a0","Type":"ContainerDied","Data":"be1e6ee56a701e7afc91a56a35c63d6d376cdacec6044e3996ca37c93df0370c"} Jan 27 18:29:57 crc kubenswrapper[4907]: I0127 18:29:57.824895 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be1e6ee56a701e7afc91a56a35c63d6d376cdacec6044e3996ca37c93df0370c" Jan 27 18:29:57 crc kubenswrapper[4907]: I0127 18:29:57.825043 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3c7d-account-create-update-f8kts" event={"ID":"db79947d-82c1-4b66-8f0d-d34b96ff9a16","Type":"ContainerDied","Data":"f849abd8948b412736e8e19289e2086ed2d13f3ca63f03d7d2ab7d0155250361"} Jan 27 18:29:57 crc kubenswrapper[4907]: I0127 18:29:57.825603 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f849abd8948b412736e8e19289e2086ed2d13f3ca63f03d7d2ab7d0155250361" Jan 27 18:29:57 crc kubenswrapper[4907]: I0127 18:29:57.825689 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b6e2-account-create-update-fr784" event={"ID":"22bda35b-bb7e-40c5-a263-56fdb4a28784","Type":"ContainerDied","Data":"03a3eedcd78cad026e71366f39357faad9378a19c7692cf37d1bdcb55446cae0"} Jan 27 18:29:57 crc kubenswrapper[4907]: I0127 18:29:57.825786 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03a3eedcd78cad026e71366f39357faad9378a19c7692cf37d1bdcb55446cae0" Jan 27 18:29:59 crc kubenswrapper[4907]: I0127 18:29:59.626849 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l7bnp" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="registry-server" probeResult="failure" output=< Jan 27 18:29:59 crc 
kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 18:29:59 crc kubenswrapper[4907]: > Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.162092 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc"] Jan 27 18:30:00 crc kubenswrapper[4907]: E0127 18:30:00.162603 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743ace74-8ac2-43c7-807c-47379f8c50f4" containerName="mariadb-database-create" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.162621 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="743ace74-8ac2-43c7-807c-47379f8c50f4" containerName="mariadb-database-create" Jan 27 18:30:00 crc kubenswrapper[4907]: E0127 18:30:00.162665 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374" containerName="mariadb-database-create" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.162674 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374" containerName="mariadb-database-create" Jan 27 18:30:00 crc kubenswrapper[4907]: E0127 18:30:00.162691 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd63a47-2bbf-455b-8732-8d489507a2a0" containerName="mariadb-database-create" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.162698 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd63a47-2bbf-455b-8732-8d489507a2a0" containerName="mariadb-database-create" Jan 27 18:30:00 crc kubenswrapper[4907]: E0127 18:30:00.162715 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22bda35b-bb7e-40c5-a263-56fdb4a28784" containerName="mariadb-account-create-update" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.162740 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="22bda35b-bb7e-40c5-a263-56fdb4a28784" containerName="mariadb-account-create-update" Jan 27 18:30:00 crc 
kubenswrapper[4907]: E0127 18:30:00.162769 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db79947d-82c1-4b66-8f0d-d34b96ff9a16" containerName="mariadb-account-create-update" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.162778 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="db79947d-82c1-4b66-8f0d-d34b96ff9a16" containerName="mariadb-account-create-update" Jan 27 18:30:00 crc kubenswrapper[4907]: E0127 18:30:00.162793 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1567baee-fe0b-481f-9aca-c424237d77fd" containerName="mariadb-account-create-update" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.162800 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1567baee-fe0b-481f-9aca-c424237d77fd" containerName="mariadb-account-create-update" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.163061 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="db79947d-82c1-4b66-8f0d-d34b96ff9a16" containerName="mariadb-account-create-update" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.163090 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="22bda35b-bb7e-40c5-a263-56fdb4a28784" containerName="mariadb-account-create-update" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.163099 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="1567baee-fe0b-481f-9aca-c424237d77fd" containerName="mariadb-account-create-update" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.163109 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="743ace74-8ac2-43c7-807c-47379f8c50f4" containerName="mariadb-database-create" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.163126 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fd63a47-2bbf-455b-8732-8d489507a2a0" containerName="mariadb-database-create" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.163132 4907 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374" containerName="mariadb-database-create" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.164071 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.166529 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.166649 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.178136 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc"] Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.249807 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-config-volume\") pod \"collect-profiles-29492310-h8pgc\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.249962 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-secret-volume\") pod \"collect-profiles-29492310-h8pgc\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.250832 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq28p\" 
(UniqueName: \"kubernetes.io/projected/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-kube-api-access-vq28p\") pod \"collect-profiles-29492310-h8pgc\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.352540 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq28p\" (UniqueName: \"kubernetes.io/projected/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-kube-api-access-vq28p\") pod \"collect-profiles-29492310-h8pgc\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.352630 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-config-volume\") pod \"collect-profiles-29492310-h8pgc\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.352715 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-secret-volume\") pod \"collect-profiles-29492310-h8pgc\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.353590 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-config-volume\") pod \"collect-profiles-29492310-h8pgc\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 
18:30:00.358169 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-secret-volume\") pod \"collect-profiles-29492310-h8pgc\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.373273 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq28p\" (UniqueName: \"kubernetes.io/projected/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-kube-api-access-vq28p\") pod \"collect-profiles-29492310-h8pgc\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.489106 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.835220 4907 generic.go:334] "Generic (PLEG): container finished" podID="c735324d-bfae-4fc6-bde7-081be56ed371" containerID="4818361f79b51a22513a2f49cb930c17d0d6847ec06dd399c5ceb8171696e7df" exitCode=0 Jan 27 18:30:00 crc kubenswrapper[4907]: I0127 18:30:00.835285 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-68c4f5ddbb-hppxn" event={"ID":"c735324d-bfae-4fc6-bde7-081be56ed371","Type":"ContainerDied","Data":"4818361f79b51a22513a2f49cb930c17d0d6847ec06dd399c5ceb8171696e7df"} Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.014319 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc"] Jan 27 18:30:01 crc kubenswrapper[4907]: W0127 18:30:01.035505 4907 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode18727dc_e815_4722_bbce_4bfe5a8ee4f2.slice/crio-6056aab27eb29d98eb45f85444932ce70b9653d8ae5818b64c5146844afea18f WatchSource:0}: Error finding container 6056aab27eb29d98eb45f85444932ce70b9653d8ae5818b64c5146844afea18f: Status 404 returned error can't find the container with id 6056aab27eb29d98eb45f85444932ce70b9653d8ae5818b64c5146844afea18f Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.125745 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.277662 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-combined-ca-bundle\") pod \"c735324d-bfae-4fc6-bde7-081be56ed371\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.277813 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krdxs\" (UniqueName: \"kubernetes.io/projected/c735324d-bfae-4fc6-bde7-081be56ed371-kube-api-access-krdxs\") pod \"c735324d-bfae-4fc6-bde7-081be56ed371\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.277853 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data-custom\") pod \"c735324d-bfae-4fc6-bde7-081be56ed371\" (UID: \"c735324d-bfae-4fc6-bde7-081be56ed371\") " Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.277992 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data\") pod \"c735324d-bfae-4fc6-bde7-081be56ed371\" (UID: 
\"c735324d-bfae-4fc6-bde7-081be56ed371\") " Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.328899 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c735324d-bfae-4fc6-bde7-081be56ed371-kube-api-access-krdxs" (OuterVolumeSpecName: "kube-api-access-krdxs") pod "c735324d-bfae-4fc6-bde7-081be56ed371" (UID: "c735324d-bfae-4fc6-bde7-081be56ed371"). InnerVolumeSpecName "kube-api-access-krdxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.328946 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c735324d-bfae-4fc6-bde7-081be56ed371" (UID: "c735324d-bfae-4fc6-bde7-081be56ed371"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.348843 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c735324d-bfae-4fc6-bde7-081be56ed371" (UID: "c735324d-bfae-4fc6-bde7-081be56ed371"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.382633 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.382667 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krdxs\" (UniqueName: \"kubernetes.io/projected/c735324d-bfae-4fc6-bde7-081be56ed371-kube-api-access-krdxs\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.382679 4907 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.391687 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data" (OuterVolumeSpecName: "config-data") pod "c735324d-bfae-4fc6-bde7-081be56ed371" (UID: "c735324d-bfae-4fc6-bde7-081be56ed371"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.484710 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c735324d-bfae-4fc6-bde7-081be56ed371-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.857904 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-68c4f5ddbb-hppxn" event={"ID":"c735324d-bfae-4fc6-bde7-081be56ed371","Type":"ContainerDied","Data":"52f3d532c41726df5137a369b3af84663e53079bc95d251a6728a34657806803"} Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.857980 4907 scope.go:117] "RemoveContainer" containerID="4818361f79b51a22513a2f49cb930c17d0d6847ec06dd399c5ceb8171696e7df" Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.858234 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-68c4f5ddbb-hppxn" Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.861858 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" event={"ID":"e18727dc-e815-4722-bbce-4bfe5a8ee4f2","Type":"ContainerStarted","Data":"ea20d869372e9205fd63ca951a287290bf5187b3c88cc6d4f04543aaf6c630c3"} Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.861913 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" event={"ID":"e18727dc-e815-4722-bbce-4bfe5a8ee4f2","Type":"ContainerStarted","Data":"6056aab27eb29d98eb45f85444932ce70b9653d8ae5818b64c5146844afea18f"} Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.961392 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-68c4f5ddbb-hppxn"] Jan 27 18:30:01 crc kubenswrapper[4907]: I0127 18:30:01.973767 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/heat-engine-68c4f5ddbb-hppxn"] Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.022763 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nfn2m"] Jan 27 18:30:02 crc kubenswrapper[4907]: E0127 18:30:02.023356 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c735324d-bfae-4fc6-bde7-081be56ed371" containerName="heat-engine" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.023381 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c735324d-bfae-4fc6-bde7-081be56ed371" containerName="heat-engine" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.023679 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="c735324d-bfae-4fc6-bde7-081be56ed371" containerName="heat-engine" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.024663 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.026818 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.027483 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mp6sz" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.027614 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.039170 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nfn2m"] Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.101367 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-config-data\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: 
\"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.101418 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-scripts\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.101609 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.101741 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gc6s\" (UniqueName: \"kubernetes.io/projected/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-kube-api-access-8gc6s\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.205686 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.205831 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gc6s\" (UniqueName: 
\"kubernetes.io/projected/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-kube-api-access-8gc6s\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.205973 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-config-data\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.206001 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-scripts\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.211109 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-scripts\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.215142 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.215164 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-config-data\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.229050 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gc6s\" (UniqueName: \"kubernetes.io/projected/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-kube-api-access-8gc6s\") pod \"nova-cell0-conductor-db-sync-nfn2m\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.342389 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:02 crc kubenswrapper[4907]: W0127 18:30:02.868330 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0230a81d_2f87_4ad2_a9b5_19cfd369f0b4.slice/crio-81524b6170471c214b6e48077205b22e5a2edbf69393883d494eac9b75802f69 WatchSource:0}: Error finding container 81524b6170471c214b6e48077205b22e5a2edbf69393883d494eac9b75802f69: Status 404 returned error can't find the container with id 81524b6170471c214b6e48077205b22e5a2edbf69393883d494eac9b75802f69 Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.886389 4907 generic.go:334] "Generic (PLEG): container finished" podID="e18727dc-e815-4722-bbce-4bfe5a8ee4f2" containerID="ea20d869372e9205fd63ca951a287290bf5187b3c88cc6d4f04543aaf6c630c3" exitCode=0 Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.886659 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" event={"ID":"e18727dc-e815-4722-bbce-4bfe5a8ee4f2","Type":"ContainerDied","Data":"ea20d869372e9205fd63ca951a287290bf5187b3c88cc6d4f04543aaf6c630c3"} Jan 27 18:30:02 crc kubenswrapper[4907]: 
I0127 18:30:02.892698 4907 generic.go:334] "Generic (PLEG): container finished" podID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerID="72c74aeeb4de5e1f3042ca5765a544364f882ce81d07c3193c2e4b04c8e2dbd3" exitCode=0 Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.892772 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9edfaf34-4000-4bda-9c1f-0f4afa06325b","Type":"ContainerDied","Data":"72c74aeeb4de5e1f3042ca5765a544364f882ce81d07c3193c2e4b04c8e2dbd3"} Jan 27 18:30:02 crc kubenswrapper[4907]: I0127 18:30:02.909670 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nfn2m"] Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.333902 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.450409 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-config-volume\") pod \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.450757 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq28p\" (UniqueName: \"kubernetes.io/projected/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-kube-api-access-vq28p\") pod \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.450878 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-secret-volume\") pod \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\" (UID: \"e18727dc-e815-4722-bbce-4bfe5a8ee4f2\") " Jan 27 18:30:03 crc 
kubenswrapper[4907]: I0127 18:30:03.451754 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-config-volume" (OuterVolumeSpecName: "config-volume") pod "e18727dc-e815-4722-bbce-4bfe5a8ee4f2" (UID: "e18727dc-e815-4722-bbce-4bfe5a8ee4f2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.472046 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-kube-api-access-vq28p" (OuterVolumeSpecName: "kube-api-access-vq28p") pod "e18727dc-e815-4722-bbce-4bfe5a8ee4f2" (UID: "e18727dc-e815-4722-bbce-4bfe5a8ee4f2"). InnerVolumeSpecName "kube-api-access-vq28p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.501905 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e18727dc-e815-4722-bbce-4bfe5a8ee4f2" (UID: "e18727dc-e815-4722-bbce-4bfe5a8ee4f2"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.553685 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vq28p\" (UniqueName: \"kubernetes.io/projected/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-kube-api-access-vq28p\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.553714 4907 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.553725 4907 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e18727dc-e815-4722-bbce-4bfe5a8ee4f2-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.777422 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c735324d-bfae-4fc6-bde7-081be56ed371" path="/var/lib/kubelet/pods/c735324d-bfae-4fc6-bde7-081be56ed371/volumes" Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.926712 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nfn2m" event={"ID":"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4","Type":"ContainerStarted","Data":"81524b6170471c214b6e48077205b22e5a2edbf69393883d494eac9b75802f69"} Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.930362 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" event={"ID":"e18727dc-e815-4722-bbce-4bfe5a8ee4f2","Type":"ContainerDied","Data":"6056aab27eb29d98eb45f85444932ce70b9653d8ae5818b64c5146844afea18f"} Jan 27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.930401 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6056aab27eb29d98eb45f85444932ce70b9653d8ae5818b64c5146844afea18f" Jan 
27 18:30:03 crc kubenswrapper[4907]: I0127 18:30:03.930455 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc" Jan 27 18:30:09 crc kubenswrapper[4907]: I0127 18:30:09.585258 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l7bnp" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="registry-server" probeResult="failure" output=< Jan 27 18:30:09 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 18:30:09 crc kubenswrapper[4907]: > Jan 27 18:30:12 crc kubenswrapper[4907]: I0127 18:30:12.038248 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nfn2m" event={"ID":"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4","Type":"ContainerStarted","Data":"bdb6c3b8d10b65b8359e6341b59fb087ae09186111397640faa7d69faf5d0b39"} Jan 27 18:30:12 crc kubenswrapper[4907]: I0127 18:30:12.071363 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-nfn2m" podStartSLOduration=2.421057437 podStartE2EDuration="11.071337414s" podCreationTimestamp="2026-01-27 18:30:01 +0000 UTC" firstStartedPulling="2026-01-27 18:30:02.872451958 +0000 UTC m=+1458.001734570" lastFinishedPulling="2026-01-27 18:30:11.522731935 +0000 UTC m=+1466.652014547" observedRunningTime="2026-01-27 18:30:12.053675192 +0000 UTC m=+1467.182957804" watchObservedRunningTime="2026-01-27 18:30:12.071337414 +0000 UTC m=+1467.200620046" Jan 27 18:30:19 crc kubenswrapper[4907]: I0127 18:30:19.586224 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l7bnp" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="registry-server" probeResult="failure" output=< Jan 27 18:30:19 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 18:30:19 crc kubenswrapper[4907]: 
> Jan 27 18:30:19 crc kubenswrapper[4907]: I0127 18:30:19.830125 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 18:30:19 crc kubenswrapper[4907]: I0127 18:30:19.834162 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 18:30:23 crc kubenswrapper[4907]: I0127 18:30:23.180518 4907 generic.go:334] "Generic (PLEG): container finished" podID="0230a81d-2f87-4ad2-a9b5-19cfd369f0b4" containerID="bdb6c3b8d10b65b8359e6341b59fb087ae09186111397640faa7d69faf5d0b39" exitCode=0 Jan 27 18:30:23 crc kubenswrapper[4907]: I0127 18:30:23.180600 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nfn2m" event={"ID":"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4","Type":"ContainerDied","Data":"bdb6c3b8d10b65b8359e6341b59fb087ae09186111397640faa7d69faf5d0b39"} Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.597292 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-gqf7g"] Jan 27 18:30:24 crc kubenswrapper[4907]: E0127 18:30:24.598186 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e18727dc-e815-4722-bbce-4bfe5a8ee4f2" containerName="collect-profiles" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.598202 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="e18727dc-e815-4722-bbce-4bfe5a8ee4f2" containerName="collect-profiles" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.598436 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="e18727dc-e815-4722-bbce-4bfe5a8ee4f2" containerName="collect-profiles" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.599167 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-gqf7g" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.613267 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-gqf7g"] Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.627536 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-368c-account-create-update-vclbz"] Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.629099 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-368c-account-create-update-vclbz" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.632434 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.651908 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.672027 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-operator-scripts\") pod \"aodh-db-create-gqf7g\" (UID: \"3cabef78-d5b3-4e61-9aa1-0f0529701fa0\") " pod="openstack/aodh-db-create-gqf7g" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.672125 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xztb4\" (UniqueName: \"kubernetes.io/projected/b430d70c-f51d-4ffd-856f-4035b5d053b7-kube-api-access-xztb4\") pod \"aodh-368c-account-create-update-vclbz\" (UID: \"b430d70c-f51d-4ffd-856f-4035b5d053b7\") " pod="openstack/aodh-368c-account-create-update-vclbz" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.672417 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsq65\" (UniqueName: 
\"kubernetes.io/projected/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-kube-api-access-jsq65\") pod \"aodh-db-create-gqf7g\" (UID: \"3cabef78-d5b3-4e61-9aa1-0f0529701fa0\") " pod="openstack/aodh-db-create-gqf7g" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.672525 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b430d70c-f51d-4ffd-856f-4035b5d053b7-operator-scripts\") pod \"aodh-368c-account-create-update-vclbz\" (UID: \"b430d70c-f51d-4ffd-856f-4035b5d053b7\") " pod="openstack/aodh-368c-account-create-update-vclbz" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.675442 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-368c-account-create-update-vclbz"] Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.773792 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-config-data\") pod \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.773865 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-combined-ca-bundle\") pod \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.774104 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-scripts\") pod \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.774164 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-8gc6s\" (UniqueName: \"kubernetes.io/projected/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-kube-api-access-8gc6s\") pod \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\" (UID: \"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4\") " Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.774483 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-operator-scripts\") pod \"aodh-db-create-gqf7g\" (UID: \"3cabef78-d5b3-4e61-9aa1-0f0529701fa0\") " pod="openstack/aodh-db-create-gqf7g" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.774545 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xztb4\" (UniqueName: \"kubernetes.io/projected/b430d70c-f51d-4ffd-856f-4035b5d053b7-kube-api-access-xztb4\") pod \"aodh-368c-account-create-update-vclbz\" (UID: \"b430d70c-f51d-4ffd-856f-4035b5d053b7\") " pod="openstack/aodh-368c-account-create-update-vclbz" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.774739 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsq65\" (UniqueName: \"kubernetes.io/projected/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-kube-api-access-jsq65\") pod \"aodh-db-create-gqf7g\" (UID: \"3cabef78-d5b3-4e61-9aa1-0f0529701fa0\") " pod="openstack/aodh-db-create-gqf7g" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.774786 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b430d70c-f51d-4ffd-856f-4035b5d053b7-operator-scripts\") pod \"aodh-368c-account-create-update-vclbz\" (UID: \"b430d70c-f51d-4ffd-856f-4035b5d053b7\") " pod="openstack/aodh-368c-account-create-update-vclbz" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.776025 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b430d70c-f51d-4ffd-856f-4035b5d053b7-operator-scripts\") pod \"aodh-368c-account-create-update-vclbz\" (UID: \"b430d70c-f51d-4ffd-856f-4035b5d053b7\") " pod="openstack/aodh-368c-account-create-update-vclbz" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.778227 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-operator-scripts\") pod \"aodh-db-create-gqf7g\" (UID: \"3cabef78-d5b3-4e61-9aa1-0f0529701fa0\") " pod="openstack/aodh-db-create-gqf7g" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.783139 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-kube-api-access-8gc6s" (OuterVolumeSpecName: "kube-api-access-8gc6s") pod "0230a81d-2f87-4ad2-a9b5-19cfd369f0b4" (UID: "0230a81d-2f87-4ad2-a9b5-19cfd369f0b4"). InnerVolumeSpecName "kube-api-access-8gc6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.784684 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-scripts" (OuterVolumeSpecName: "scripts") pod "0230a81d-2f87-4ad2-a9b5-19cfd369f0b4" (UID: "0230a81d-2f87-4ad2-a9b5-19cfd369f0b4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.796610 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsq65\" (UniqueName: \"kubernetes.io/projected/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-kube-api-access-jsq65\") pod \"aodh-db-create-gqf7g\" (UID: \"3cabef78-d5b3-4e61-9aa1-0f0529701fa0\") " pod="openstack/aodh-db-create-gqf7g" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.797747 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xztb4\" (UniqueName: \"kubernetes.io/projected/b430d70c-f51d-4ffd-856f-4035b5d053b7-kube-api-access-xztb4\") pod \"aodh-368c-account-create-update-vclbz\" (UID: \"b430d70c-f51d-4ffd-856f-4035b5d053b7\") " pod="openstack/aodh-368c-account-create-update-vclbz" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.810761 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0230a81d-2f87-4ad2-a9b5-19cfd369f0b4" (UID: "0230a81d-2f87-4ad2-a9b5-19cfd369f0b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.815226 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-config-data" (OuterVolumeSpecName: "config-data") pod "0230a81d-2f87-4ad2-a9b5-19cfd369f0b4" (UID: "0230a81d-2f87-4ad2-a9b5-19cfd369f0b4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.876744 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.876778 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.876789 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gc6s\" (UniqueName: \"kubernetes.io/projected/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-kube-api-access-8gc6s\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.876801 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.966836 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-gqf7g" Jan 27 18:30:24 crc kubenswrapper[4907]: I0127 18:30:24.975857 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-368c-account-create-update-vclbz" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.221289 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nfn2m" event={"ID":"0230a81d-2f87-4ad2-a9b5-19cfd369f0b4","Type":"ContainerDied","Data":"81524b6170471c214b6e48077205b22e5a2edbf69393883d494eac9b75802f69"} Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.221333 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81524b6170471c214b6e48077205b22e5a2edbf69393883d494eac9b75802f69" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.221398 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nfn2m" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.332365 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 18:30:25 crc kubenswrapper[4907]: E0127 18:30:25.332879 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0230a81d-2f87-4ad2-a9b5-19cfd369f0b4" containerName="nova-cell0-conductor-db-sync" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.332901 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="0230a81d-2f87-4ad2-a9b5-19cfd369f0b4" containerName="nova-cell0-conductor-db-sync" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.333181 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="0230a81d-2f87-4ad2-a9b5-19cfd369f0b4" containerName="nova-cell0-conductor-db-sync" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.333924 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.336360 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mp6sz" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.336452 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.351318 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.390441 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kxmt\" (UniqueName: \"kubernetes.io/projected/7a7fd860-ac95-4571-99c5-b416f9a9bae9-kube-api-access-8kxmt\") pod \"nova-cell0-conductor-0\" (UID: \"7a7fd860-ac95-4571-99c5-b416f9a9bae9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.390630 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a7fd860-ac95-4571-99c5-b416f9a9bae9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7a7fd860-ac95-4571-99c5-b416f9a9bae9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.390858 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a7fd860-ac95-4571-99c5-b416f9a9bae9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7a7fd860-ac95-4571-99c5-b416f9a9bae9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.492943 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kxmt\" (UniqueName: 
\"kubernetes.io/projected/7a7fd860-ac95-4571-99c5-b416f9a9bae9-kube-api-access-8kxmt\") pod \"nova-cell0-conductor-0\" (UID: \"7a7fd860-ac95-4571-99c5-b416f9a9bae9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.493326 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a7fd860-ac95-4571-99c5-b416f9a9bae9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7a7fd860-ac95-4571-99c5-b416f9a9bae9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.493392 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a7fd860-ac95-4571-99c5-b416f9a9bae9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7a7fd860-ac95-4571-99c5-b416f9a9bae9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.500219 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a7fd860-ac95-4571-99c5-b416f9a9bae9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7a7fd860-ac95-4571-99c5-b416f9a9bae9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.504206 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a7fd860-ac95-4571-99c5-b416f9a9bae9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7a7fd860-ac95-4571-99c5-b416f9a9bae9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.517270 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kxmt\" (UniqueName: \"kubernetes.io/projected/7a7fd860-ac95-4571-99c5-b416f9a9bae9-kube-api-access-8kxmt\") pod \"nova-cell0-conductor-0\" (UID: 
\"7a7fd860-ac95-4571-99c5-b416f9a9bae9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.527616 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-gqf7g"] Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.649760 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:25 crc kubenswrapper[4907]: I0127 18:30:25.713483 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-368c-account-create-update-vclbz"] Jan 27 18:30:25 crc kubenswrapper[4907]: W0127 18:30:25.723616 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb430d70c_f51d_4ffd_856f_4035b5d053b7.slice/crio-44d3e733064b38bcb48f9e2fb82983e38b291d4d9cfd983c954b7bbb69272efa WatchSource:0}: Error finding container 44d3e733064b38bcb48f9e2fb82983e38b291d4d9cfd983c954b7bbb69272efa: Status 404 returned error can't find the container with id 44d3e733064b38bcb48f9e2fb82983e38b291d4d9cfd983c954b7bbb69272efa Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.235217 4907 generic.go:334] "Generic (PLEG): container finished" podID="3cabef78-d5b3-4e61-9aa1-0f0529701fa0" containerID="bcadd918583503f919d13b0b59f8aab8c38430332c4b149a0d7656fa676f51fb" exitCode=0 Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.235683 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gqf7g" event={"ID":"3cabef78-d5b3-4e61-9aa1-0f0529701fa0","Type":"ContainerDied","Data":"bcadd918583503f919d13b0b59f8aab8c38430332c4b149a0d7656fa676f51fb"} Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.235715 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gqf7g" event={"ID":"3cabef78-d5b3-4e61-9aa1-0f0529701fa0","Type":"ContainerStarted","Data":"c4748e7778f96438dd6a8c4e757aeb996d6e8042073a5584b36550fcaef4ce97"} 
Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.237803 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.238874 4907 generic.go:334] "Generic (PLEG): container finished" podID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerID="f6666bbbc5694f2bb840d66dd6dd5334ea08b62866d58575c225940d82650561" exitCode=137 Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.238927 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9edfaf34-4000-4bda-9c1f-0f4afa06325b","Type":"ContainerDied","Data":"f6666bbbc5694f2bb840d66dd6dd5334ea08b62866d58575c225940d82650561"} Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.238947 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9edfaf34-4000-4bda-9c1f-0f4afa06325b","Type":"ContainerDied","Data":"014b0a2c54edb8d7037c19521867860fb3792d1b8d5ae88075eb18294d048079"} Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.238959 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="014b0a2c54edb8d7037c19521867860fb3792d1b8d5ae88075eb18294d048079" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.241414 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-368c-account-create-update-vclbz" event={"ID":"b430d70c-f51d-4ffd-856f-4035b5d053b7","Type":"ContainerStarted","Data":"8b3f4a3edfa3e0499e7c2d7527a3165ef93a166220b850ffa84c2a695cc34f3c"} Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.241454 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-368c-account-create-update-vclbz" event={"ID":"b430d70c-f51d-4ffd-856f-4035b5d053b7","Type":"ContainerStarted","Data":"44d3e733064b38bcb48f9e2fb82983e38b291d4d9cfd983c954b7bbb69272efa"} Jan 27 18:30:26 crc kubenswrapper[4907]: W0127 18:30:26.327392 4907 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a7fd860_ac95_4571_99c5_b416f9a9bae9.slice/crio-9fe81599a6f19f287dc73ded5b05018d4ab7098e6a60544ad6a53df77c2c2eb3 WatchSource:0}: Error finding container 9fe81599a6f19f287dc73ded5b05018d4ab7098e6a60544ad6a53df77c2c2eb3: Status 404 returned error can't find the container with id 9fe81599a6f19f287dc73ded5b05018d4ab7098e6a60544ad6a53df77c2c2eb3 Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.331295 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.421296 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-run-httpd\") pod \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.421365 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-log-httpd\") pod \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.421416 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-scripts\") pod \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.421502 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-config-data\") pod \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " Jan 27 18:30:26 crc 
kubenswrapper[4907]: I0127 18:30:26.421577 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-combined-ca-bundle\") pod \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.421679 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-sg-core-conf-yaml\") pod \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.421715 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqqhq\" (UniqueName: \"kubernetes.io/projected/9edfaf34-4000-4bda-9c1f-0f4afa06325b-kube-api-access-mqqhq\") pod \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\" (UID: \"9edfaf34-4000-4bda-9c1f-0f4afa06325b\") " Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.425515 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9edfaf34-4000-4bda-9c1f-0f4afa06325b" (UID: "9edfaf34-4000-4bda-9c1f-0f4afa06325b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.425597 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9edfaf34-4000-4bda-9c1f-0f4afa06325b" (UID: "9edfaf34-4000-4bda-9c1f-0f4afa06325b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.428816 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-scripts" (OuterVolumeSpecName: "scripts") pod "9edfaf34-4000-4bda-9c1f-0f4afa06325b" (UID: "9edfaf34-4000-4bda-9c1f-0f4afa06325b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.432338 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9edfaf34-4000-4bda-9c1f-0f4afa06325b-kube-api-access-mqqhq" (OuterVolumeSpecName: "kube-api-access-mqqhq") pod "9edfaf34-4000-4bda-9c1f-0f4afa06325b" (UID: "9edfaf34-4000-4bda-9c1f-0f4afa06325b"). InnerVolumeSpecName "kube-api-access-mqqhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.465388 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9edfaf34-4000-4bda-9c1f-0f4afa06325b" (UID: "9edfaf34-4000-4bda-9c1f-0f4afa06325b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.525959 4907 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.525995 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqqhq\" (UniqueName: \"kubernetes.io/projected/9edfaf34-4000-4bda-9c1f-0f4afa06325b-kube-api-access-mqqhq\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.526011 4907 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.526022 4907 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9edfaf34-4000-4bda-9c1f-0f4afa06325b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.526033 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.543734 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9edfaf34-4000-4bda-9c1f-0f4afa06325b" (UID: "9edfaf34-4000-4bda-9c1f-0f4afa06325b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.575401 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-config-data" (OuterVolumeSpecName: "config-data") pod "9edfaf34-4000-4bda-9c1f-0f4afa06325b" (UID: "9edfaf34-4000-4bda-9c1f-0f4afa06325b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.627613 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:26 crc kubenswrapper[4907]: I0127 18:30:26.627641 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9edfaf34-4000-4bda-9c1f-0f4afa06325b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.259133 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7a7fd860-ac95-4571-99c5-b416f9a9bae9","Type":"ContainerStarted","Data":"7904bdbf0ee42c22d0f04ac3efc61e09adf2803e876d6362e758a18e6af589b8"} Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.259497 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7a7fd860-ac95-4571-99c5-b416f9a9bae9","Type":"ContainerStarted","Data":"9fe81599a6f19f287dc73ded5b05018d4ab7098e6a60544ad6a53df77c2c2eb3"} Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.259694 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.261924 4907 generic.go:334] "Generic (PLEG): container finished" podID="b430d70c-f51d-4ffd-856f-4035b5d053b7" 
containerID="8b3f4a3edfa3e0499e7c2d7527a3165ef93a166220b850ffa84c2a695cc34f3c" exitCode=0 Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.262301 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-368c-account-create-update-vclbz" event={"ID":"b430d70c-f51d-4ffd-856f-4035b5d053b7","Type":"ContainerDied","Data":"8b3f4a3edfa3e0499e7c2d7527a3165ef93a166220b850ffa84c2a695cc34f3c"} Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.262410 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.280206 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.280186985 podStartE2EDuration="2.280186985s" podCreationTimestamp="2026-01-27 18:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:27.277007135 +0000 UTC m=+1482.406289757" watchObservedRunningTime="2026-01-27 18:30:27.280186985 +0000 UTC m=+1482.409469597" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.323111 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.350621 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.365549 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:30:27 crc kubenswrapper[4907]: E0127 18:30:27.366374 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="ceilometer-central-agent" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.366422 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" 
containerName="ceilometer-central-agent" Jan 27 18:30:27 crc kubenswrapper[4907]: E0127 18:30:27.366450 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="proxy-httpd" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.366460 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="proxy-httpd" Jan 27 18:30:27 crc kubenswrapper[4907]: E0127 18:30:27.366506 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="ceilometer-notification-agent" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.366516 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="ceilometer-notification-agent" Jan 27 18:30:27 crc kubenswrapper[4907]: E0127 18:30:27.366537 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="sg-core" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.366577 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="sg-core" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.366927 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="ceilometer-notification-agent" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.366954 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="proxy-httpd" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.367004 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="ceilometer-central-agent" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.367032 4907 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" containerName="sg-core" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.370264 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.376803 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.377033 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.383957 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.445709 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-scripts\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.445767 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-config-data\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.445802 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.445830 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jbpth\" (UniqueName: \"kubernetes.io/projected/43c0bea1-2042-4d24-81b3-bc7c93696fcb-kube-api-access-jbpth\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.445862 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-run-httpd\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.445906 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-log-httpd\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.446097 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.548185 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-scripts\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.548241 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-config-data\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " 
pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.548259 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.548277 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbpth\" (UniqueName: \"kubernetes.io/projected/43c0bea1-2042-4d24-81b3-bc7c93696fcb-kube-api-access-jbpth\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.548302 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-run-httpd\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.548334 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-log-httpd\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.548393 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.552211 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-run-httpd\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.552279 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-log-httpd\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.554698 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-scripts\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.557143 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.557507 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-config-data\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.568735 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.588357 4907 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jbpth\" (UniqueName: \"kubernetes.io/projected/43c0bea1-2042-4d24-81b3-bc7c93696fcb-kube-api-access-jbpth\") pod \"ceilometer-0\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.700784 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.797146 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9edfaf34-4000-4bda-9c1f-0f4afa06325b" path="/var/lib/kubelet/pods/9edfaf34-4000-4bda-9c1f-0f4afa06325b/volumes" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.970643 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-368c-account-create-update-vclbz" Jan 27 18:30:27 crc kubenswrapper[4907]: I0127 18:30:27.979075 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-gqf7g" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.067157 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-operator-scripts\") pod \"3cabef78-d5b3-4e61-9aa1-0f0529701fa0\" (UID: \"3cabef78-d5b3-4e61-9aa1-0f0529701fa0\") " Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.067368 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsq65\" (UniqueName: \"kubernetes.io/projected/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-kube-api-access-jsq65\") pod \"3cabef78-d5b3-4e61-9aa1-0f0529701fa0\" (UID: \"3cabef78-d5b3-4e61-9aa1-0f0529701fa0\") " Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.067397 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b430d70c-f51d-4ffd-856f-4035b5d053b7-operator-scripts\") pod \"b430d70c-f51d-4ffd-856f-4035b5d053b7\" (UID: \"b430d70c-f51d-4ffd-856f-4035b5d053b7\") " Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.067423 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xztb4\" (UniqueName: \"kubernetes.io/projected/b430d70c-f51d-4ffd-856f-4035b5d053b7-kube-api-access-xztb4\") pod \"b430d70c-f51d-4ffd-856f-4035b5d053b7\" (UID: \"b430d70c-f51d-4ffd-856f-4035b5d053b7\") " Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.067963 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3cabef78-d5b3-4e61-9aa1-0f0529701fa0" (UID: "3cabef78-d5b3-4e61-9aa1-0f0529701fa0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.067981 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b430d70c-f51d-4ffd-856f-4035b5d053b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b430d70c-f51d-4ffd-856f-4035b5d053b7" (UID: "b430d70c-f51d-4ffd-856f-4035b5d053b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.072982 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-kube-api-access-jsq65" (OuterVolumeSpecName: "kube-api-access-jsq65") pod "3cabef78-d5b3-4e61-9aa1-0f0529701fa0" (UID: "3cabef78-d5b3-4e61-9aa1-0f0529701fa0"). InnerVolumeSpecName "kube-api-access-jsq65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.078324 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b430d70c-f51d-4ffd-856f-4035b5d053b7-kube-api-access-xztb4" (OuterVolumeSpecName: "kube-api-access-xztb4") pod "b430d70c-f51d-4ffd-856f-4035b5d053b7" (UID: "b430d70c-f51d-4ffd-856f-4035b5d053b7"). InnerVolumeSpecName "kube-api-access-xztb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.169906 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsq65\" (UniqueName: \"kubernetes.io/projected/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-kube-api-access-jsq65\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.169942 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b430d70c-f51d-4ffd-856f-4035b5d053b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.169955 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xztb4\" (UniqueName: \"kubernetes.io/projected/b430d70c-f51d-4ffd-856f-4035b5d053b7-kube-api-access-xztb4\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.169964 4907 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cabef78-d5b3-4e61-9aa1-0f0529701fa0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.232888 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:30:28 crc kubenswrapper[4907]: W0127 18:30:28.237601 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c0bea1_2042_4d24_81b3_bc7c93696fcb.slice/crio-e1d0051373ff4add5140f7661da5a07a35b758893b2aaad5ceeef50c9d5f6b2b WatchSource:0}: Error finding container e1d0051373ff4add5140f7661da5a07a35b758893b2aaad5ceeef50c9d5f6b2b: Status 404 returned error can't find the container with id e1d0051373ff4add5140f7661da5a07a35b758893b2aaad5ceeef50c9d5f6b2b Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.274788 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-368c-account-create-update-vclbz" event={"ID":"b430d70c-f51d-4ffd-856f-4035b5d053b7","Type":"ContainerDied","Data":"44d3e733064b38bcb48f9e2fb82983e38b291d4d9cfd983c954b7bbb69272efa"} Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.275945 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44d3e733064b38bcb48f9e2fb82983e38b291d4d9cfd983c954b7bbb69272efa" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.274837 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-368c-account-create-update-vclbz" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.276808 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43c0bea1-2042-4d24-81b3-bc7c93696fcb","Type":"ContainerStarted","Data":"e1d0051373ff4add5140f7661da5a07a35b758893b2aaad5ceeef50c9d5f6b2b"} Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.278634 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-gqf7g" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.278650 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gqf7g" event={"ID":"3cabef78-d5b3-4e61-9aa1-0f0529701fa0","Type":"ContainerDied","Data":"c4748e7778f96438dd6a8c4e757aeb996d6e8042073a5584b36550fcaef4ce97"} Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.278709 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4748e7778f96438dd6a8c4e757aeb996d6e8042073a5584b36550fcaef4ce97" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.654707 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.703154 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:30:28 crc kubenswrapper[4907]: I0127 18:30:28.893667 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l7bnp"] Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.291531 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43c0bea1-2042-4d24-81b3-bc7c93696fcb","Type":"ContainerStarted","Data":"b0ecf89b35415280ed28a78077dbec5c9a90cfd8e1f7a3067489af30dd9433f7"} Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.924372 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-lvm8r"] Jan 27 18:30:29 crc kubenswrapper[4907]: E0127 18:30:29.925290 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cabef78-d5b3-4e61-9aa1-0f0529701fa0" containerName="mariadb-database-create" Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.925316 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cabef78-d5b3-4e61-9aa1-0f0529701fa0" containerName="mariadb-database-create" Jan 27 
18:30:29 crc kubenswrapper[4907]: E0127 18:30:29.925360 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b430d70c-f51d-4ffd-856f-4035b5d053b7" containerName="mariadb-account-create-update" Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.925368 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="b430d70c-f51d-4ffd-856f-4035b5d053b7" containerName="mariadb-account-create-update" Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.925647 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="b430d70c-f51d-4ffd-856f-4035b5d053b7" containerName="mariadb-account-create-update" Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.925677 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cabef78-d5b3-4e61-9aa1-0f0529701fa0" containerName="mariadb-database-create" Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.926536 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.928868 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.930744 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.930999 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.931372 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-xd6ml" Jan 27 18:30:29 crc kubenswrapper[4907]: I0127 18:30:29.937781 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-lvm8r"] Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.049919 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-combined-ca-bundle\") pod \"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.049990 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-scripts\") pod \"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.050032 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-config-data\") pod \"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.050169 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7928g\" (UniqueName: \"kubernetes.io/projected/c16f7a68-05a6-494f-94ce-1774118b0592-kube-api-access-7928g\") pod \"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.153098 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-combined-ca-bundle\") pod \"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.153371 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-scripts\") pod 
\"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.153427 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-config-data\") pod \"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.158388 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-combined-ca-bundle\") pod \"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.166111 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7928g\" (UniqueName: \"kubernetes.io/projected/c16f7a68-05a6-494f-94ce-1774118b0592-kube-api-access-7928g\") pod \"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.174169 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-config-data\") pod \"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.180267 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-scripts\") pod \"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.193757 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7928g\" (UniqueName: \"kubernetes.io/projected/c16f7a68-05a6-494f-94ce-1774118b0592-kube-api-access-7928g\") pod \"aodh-db-sync-lvm8r\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.247633 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.314746 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l7bnp" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="registry-server" containerID="cri-o://317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db" gracePeriod=2 Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.315052 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43c0bea1-2042-4d24-81b3-bc7c93696fcb","Type":"ContainerStarted","Data":"523048f659a18237b97cb416947fddec726a9e99732d2c72741880719562907f"} Jan 27 18:30:30 crc kubenswrapper[4907]: I0127 18:30:30.846821 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-lvm8r"] Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.076405 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.192949 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-utilities\") pod \"32a7503d-bec1-4b22-a132-abaa924af073\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.193108 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xjmj\" (UniqueName: \"kubernetes.io/projected/32a7503d-bec1-4b22-a132-abaa924af073-kube-api-access-2xjmj\") pod \"32a7503d-bec1-4b22-a132-abaa924af073\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.193175 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-catalog-content\") pod \"32a7503d-bec1-4b22-a132-abaa924af073\" (UID: \"32a7503d-bec1-4b22-a132-abaa924af073\") " Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.193813 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-utilities" (OuterVolumeSpecName: "utilities") pod "32a7503d-bec1-4b22-a132-abaa924af073" (UID: "32a7503d-bec1-4b22-a132-abaa924af073"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.201210 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32a7503d-bec1-4b22-a132-abaa924af073-kube-api-access-2xjmj" (OuterVolumeSpecName: "kube-api-access-2xjmj") pod "32a7503d-bec1-4b22-a132-abaa924af073" (UID: "32a7503d-bec1-4b22-a132-abaa924af073"). InnerVolumeSpecName "kube-api-access-2xjmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.295759 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.295793 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xjmj\" (UniqueName: \"kubernetes.io/projected/32a7503d-bec1-4b22-a132-abaa924af073-kube-api-access-2xjmj\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.320103 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32a7503d-bec1-4b22-a132-abaa924af073" (UID: "32a7503d-bec1-4b22-a132-abaa924af073"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.330307 4907 generic.go:334] "Generic (PLEG): container finished" podID="32a7503d-bec1-4b22-a132-abaa924af073" containerID="317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db" exitCode=0 Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.330354 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7bnp" event={"ID":"32a7503d-bec1-4b22-a132-abaa924af073","Type":"ContainerDied","Data":"317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db"} Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.330403 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7bnp" event={"ID":"32a7503d-bec1-4b22-a132-abaa924af073","Type":"ContainerDied","Data":"02a7ef787ad2af55aee76003ed5f2c734d79246a8b02f7fc6a11cdc00fcff410"} Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.330401 
4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l7bnp" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.330490 4907 scope.go:117] "RemoveContainer" containerID="317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.338075 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-lvm8r" event={"ID":"c16f7a68-05a6-494f-94ce-1774118b0592","Type":"ContainerStarted","Data":"75948b867e63aed3cefc02d5b081a1666ba2b4c5ee153c818cd47b52df0895f4"} Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.340811 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43c0bea1-2042-4d24-81b3-bc7c93696fcb","Type":"ContainerStarted","Data":"15de9a0dedeadeed2c3936432cdefafa1e6e44b42875901f6ff2f3beb62f8528"} Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.367619 4907 scope.go:117] "RemoveContainer" containerID="c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.374529 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l7bnp"] Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.387947 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l7bnp"] Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.398072 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a7503d-bec1-4b22-a132-abaa924af073-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.402229 4907 scope.go:117] "RemoveContainer" containerID="42e2f3c6eb7900c322183e56b2c9b33010feeb36c90a07a1f9c4172617b6ca0c" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.431195 4907 scope.go:117] "RemoveContainer" 
containerID="317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db" Jan 27 18:30:31 crc kubenswrapper[4907]: E0127 18:30:31.431791 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db\": container with ID starting with 317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db not found: ID does not exist" containerID="317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.431826 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db"} err="failed to get container status \"317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db\": rpc error: code = NotFound desc = could not find container \"317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db\": container with ID starting with 317302c94cd03bf60a21d1049e5ea0d7f952c7ea8e2614615884b841820f50db not found: ID does not exist" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.431848 4907 scope.go:117] "RemoveContainer" containerID="c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b" Jan 27 18:30:31 crc kubenswrapper[4907]: E0127 18:30:31.432427 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b\": container with ID starting with c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b not found: ID does not exist" containerID="c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.432456 4907 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b"} err="failed to get container status \"c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b\": rpc error: code = NotFound desc = could not find container \"c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b\": container with ID starting with c78ca6345ad85dbc29ea74dbac751fe426f18ce276bce03c569ea24d9daf148b not found: ID does not exist" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.432476 4907 scope.go:117] "RemoveContainer" containerID="42e2f3c6eb7900c322183e56b2c9b33010feeb36c90a07a1f9c4172617b6ca0c" Jan 27 18:30:31 crc kubenswrapper[4907]: E0127 18:30:31.432764 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42e2f3c6eb7900c322183e56b2c9b33010feeb36c90a07a1f9c4172617b6ca0c\": container with ID starting with 42e2f3c6eb7900c322183e56b2c9b33010feeb36c90a07a1f9c4172617b6ca0c not found: ID does not exist" containerID="42e2f3c6eb7900c322183e56b2c9b33010feeb36c90a07a1f9c4172617b6ca0c" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.432830 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42e2f3c6eb7900c322183e56b2c9b33010feeb36c90a07a1f9c4172617b6ca0c"} err="failed to get container status \"42e2f3c6eb7900c322183e56b2c9b33010feeb36c90a07a1f9c4172617b6ca0c\": rpc error: code = NotFound desc = could not find container \"42e2f3c6eb7900c322183e56b2c9b33010feeb36c90a07a1f9c4172617b6ca0c\": container with ID starting with 42e2f3c6eb7900c322183e56b2c9b33010feeb36c90a07a1f9c4172617b6ca0c not found: ID does not exist" Jan 27 18:30:31 crc kubenswrapper[4907]: I0127 18:30:31.767319 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32a7503d-bec1-4b22-a132-abaa924af073" path="/var/lib/kubelet/pods/32a7503d-bec1-4b22-a132-abaa924af073/volumes" Jan 27 18:30:32 crc kubenswrapper[4907]: I0127 
18:30:32.356191 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43c0bea1-2042-4d24-81b3-bc7c93696fcb","Type":"ContainerStarted","Data":"160bfbd8ff3d696b07a91546ee269e538de92123a1f5d507d072d96996a51021"} Jan 27 18:30:32 crc kubenswrapper[4907]: I0127 18:30:32.357190 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 18:30:32 crc kubenswrapper[4907]: I0127 18:30:32.381979 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.746838652 podStartE2EDuration="5.381960755s" podCreationTimestamp="2026-01-27 18:30:27 +0000 UTC" firstStartedPulling="2026-01-27 18:30:28.241904226 +0000 UTC m=+1483.371186838" lastFinishedPulling="2026-01-27 18:30:31.877026329 +0000 UTC m=+1487.006308941" observedRunningTime="2026-01-27 18:30:32.376547971 +0000 UTC m=+1487.505830593" watchObservedRunningTime="2026-01-27 18:30:32.381960755 +0000 UTC m=+1487.511243367" Jan 27 18:30:35 crc kubenswrapper[4907]: I0127 18:30:35.398740 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-lvm8r" event={"ID":"c16f7a68-05a6-494f-94ce-1774118b0592","Type":"ContainerStarted","Data":"7f3f482aaf8608c33753ad6013ec3d55dce11d1495376c8c771ab3fee9efdee3"} Jan 27 18:30:35 crc kubenswrapper[4907]: I0127 18:30:35.427804 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-lvm8r" podStartSLOduration=2.307887749 podStartE2EDuration="6.427780047s" podCreationTimestamp="2026-01-27 18:30:29 +0000 UTC" firstStartedPulling="2026-01-27 18:30:30.847533889 +0000 UTC m=+1485.976816511" lastFinishedPulling="2026-01-27 18:30:34.967426197 +0000 UTC m=+1490.096708809" observedRunningTime="2026-01-27 18:30:35.419512323 +0000 UTC m=+1490.548794935" watchObservedRunningTime="2026-01-27 18:30:35.427780047 +0000 UTC m=+1490.557062669" Jan 27 18:30:35 crc kubenswrapper[4907]: I0127 
18:30:35.700780 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.320874 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-749bg"] Jan 27 18:30:36 crc kubenswrapper[4907]: E0127 18:30:36.334359 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="extract-content" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.334405 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="extract-content" Jan 27 18:30:36 crc kubenswrapper[4907]: E0127 18:30:36.334458 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="registry-server" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.334468 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="registry-server" Jan 27 18:30:36 crc kubenswrapper[4907]: E0127 18:30:36.334515 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="extract-utilities" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.334527 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="extract-utilities" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.335090 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a7503d-bec1-4b22-a132-abaa924af073" containerName="registry-server" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.339791 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-749bg" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.352135 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.353008 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-749bg"] Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.353039 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.423974 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-config-data\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.424026 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-scripts\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.424160 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.424263 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf2fl\" (UniqueName: 
\"kubernetes.io/projected/73c0d1c7-cc84-4792-be06-ce4535d854f1-kube-api-access-lf2fl\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.529288 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.529435 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf2fl\" (UniqueName: \"kubernetes.io/projected/73c0d1c7-cc84-4792-be06-ce4535d854f1-kube-api-access-lf2fl\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.529757 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-config-data\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.529789 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-scripts\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.539129 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-config-data\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.545363 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-scripts\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.557529 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.581232 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf2fl\" (UniqueName: \"kubernetes.io/projected/73c0d1c7-cc84-4792-be06-ce4535d854f1-kube-api-access-lf2fl\") pod \"nova-cell0-cell-mapping-749bg\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " pod="openstack/nova-cell0-cell-mapping-749bg" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.601966 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.603760 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.615005 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.647628 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.679745 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-749bg" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.710937 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.712909 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.719376 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.736967 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.737190 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-config-data\") pod \"nova-scheduler-0\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.739503 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bw6sl\" (UniqueName: \"kubernetes.io/projected/45bfe136-4245-4d16-9c68-2a21136b3b9a-kube-api-access-bw6sl\") pod \"nova-scheduler-0\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.748918 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.801481 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.803312 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.811401 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.829729 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.841906 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.841947 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-config-data\") pod \"nova-scheduler-0\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.842005 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4269e70c-a481-47cf-a9fe-7d9095cb4445-logs\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.842049 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt2z6\" (UniqueName: \"kubernetes.io/projected/4269e70c-a481-47cf-a9fe-7d9095cb4445-kube-api-access-vt2z6\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.842094 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-config-data\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.842198 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw6sl\" (UniqueName: \"kubernetes.io/projected/45bfe136-4245-4d16-9c68-2a21136b3b9a-kube-api-access-bw6sl\") pod \"nova-scheduler-0\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.842260 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.846928 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.863176 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-config-data\") pod \"nova-scheduler-0\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.875805 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7877d89589-nft4l"] Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.889582 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.890029 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.899152 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw6sl\" (UniqueName: \"kubernetes.io/projected/45bfe136-4245-4d16-9c68-2a21136b3b9a-kube-api-access-bw6sl\") pod \"nova-scheduler-0\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.909959 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-nft4l"] Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.910077 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.912594 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.922275 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944190 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-svc\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944234 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-config-data\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944266 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944288 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944320 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944353 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944389 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944411 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skg6v\" (UniqueName: \"kubernetes.io/projected/9af43216-6482-4024-a320-fa8855680d03-kube-api-access-skg6v\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944446 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdwr2\" (UniqueName: \"kubernetes.io/projected/d1efa2c3-6982-45b0-830c-043caf2979ba-kube-api-access-tdwr2\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944485 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af43216-6482-4024-a320-fa8855680d03-logs\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944526 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944594 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4269e70c-a481-47cf-a9fe-7d9095cb4445-logs\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944618 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-config\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944651 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r7nn\" (UniqueName: \"kubernetes.io/projected/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-kube-api-access-7r7nn\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944692 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt2z6\" (UniqueName: 
\"kubernetes.io/projected/4269e70c-a481-47cf-a9fe-7d9095cb4445-kube-api-access-vt2z6\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944724 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.944757 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-config-data\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.953651 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4269e70c-a481-47cf-a9fe-7d9095cb4445-logs\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.961461 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-config-data\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.974824 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt2z6\" (UniqueName: \"kubernetes.io/projected/4269e70c-a481-47cf-a9fe-7d9095cb4445-kube-api-access-vt2z6\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " 
pod="openstack/nova-metadata-0" Jan 27 18:30:36 crc kubenswrapper[4907]: I0127 18:30:36.982578 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") " pod="openstack/nova-metadata-0" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.056206 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.056610 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.056659 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skg6v\" (UniqueName: \"kubernetes.io/projected/9af43216-6482-4024-a320-fa8855680d03-kube-api-access-skg6v\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.058653 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.056698 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdwr2\" (UniqueName: \"kubernetes.io/projected/d1efa2c3-6982-45b0-830c-043caf2979ba-kube-api-access-tdwr2\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.059522 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af43216-6482-4024-a320-fa8855680d03-logs\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.059635 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.059732 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-config\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.059792 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r7nn\" (UniqueName: \"kubernetes.io/projected/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-kube-api-access-7r7nn\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.059888 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.061015 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af43216-6482-4024-a320-fa8855680d03-logs\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.061719 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-svc\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.061752 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-config-data\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.063991 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.064264 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-svc\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " 
pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.064332 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.065206 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-config\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.065254 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.066182 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.066744 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.068705 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.076602 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.080425 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-config-data\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.088619 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r7nn\" (UniqueName: \"kubernetes.io/projected/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-kube-api-access-7r7nn\") pod \"dnsmasq-dns-7877d89589-nft4l\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.093494 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.106244 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skg6v\" (UniqueName: \"kubernetes.io/projected/9af43216-6482-4024-a320-fa8855680d03-kube-api-access-skg6v\") pod \"nova-api-0\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " pod="openstack/nova-api-0" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.110896 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.116967 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdwr2\" (UniqueName: \"kubernetes.io/projected/d1efa2c3-6982-45b0-830c-043caf2979ba-kube-api-access-tdwr2\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.125985 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.159116 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.415895 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.438670 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-749bg"] Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.963618 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:30:37 crc kubenswrapper[4907]: I0127 18:30:37.985402 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.226824 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nr6n7"] Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.230459 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.246294 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.282193 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.287023 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nr6n7"] Jan 27 18:30:38 crc kubenswrapper[4907]: W0127 18:30:38.293925 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8dbf3816_36b8_40ed_8dc6_3faf4b571dd6.slice/crio-d30a1110debfe208efbed5b40f2d5484a3e55ac5bae77f089255259716c05851 WatchSource:0}: Error finding container d30a1110debfe208efbed5b40f2d5484a3e55ac5bae77f089255259716c05851: Status 404 returned error can't find the container with id d30a1110debfe208efbed5b40f2d5484a3e55ac5bae77f089255259716c05851 Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 
18:30:38.337157 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-nft4l"] Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.361350 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgx2v\" (UniqueName: \"kubernetes.io/projected/8f9b4dfd-c141-4a97-9656-3f48e7a04309-kube-api-access-qgx2v\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.361745 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-scripts\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.361959 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-config-data\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.362120 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.375824 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.465925 
4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgx2v\" (UniqueName: \"kubernetes.io/projected/8f9b4dfd-c141-4a97-9656-3f48e7a04309-kube-api-access-qgx2v\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.466002 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-scripts\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.466071 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-config-data\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.466106 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.476230 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-config-data\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.476642 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-scripts\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.481073 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.487934 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgx2v\" (UniqueName: \"kubernetes.io/projected/8f9b4dfd-c141-4a97-9656-3f48e7a04309-kube-api-access-qgx2v\") pod \"nova-cell1-conductor-db-sync-nr6n7\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.497790 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-nft4l" event={"ID":"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6","Type":"ContainerStarted","Data":"d30a1110debfe208efbed5b40f2d5484a3e55ac5bae77f089255259716c05851"} Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.501405 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"45bfe136-4245-4d16-9c68-2a21136b3b9a","Type":"ContainerStarted","Data":"92c93f931d8f51099774c248291d876431d14b5c43aa83d56753a0ac2d31f02a"} Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.505707 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"4269e70c-a481-47cf-a9fe-7d9095cb4445","Type":"ContainerStarted","Data":"88ea244257e06e56a19501ea1e6124f02fd538a2f4b98bed18392bb73c6da0d7"} Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.514022 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-749bg" event={"ID":"73c0d1c7-cc84-4792-be06-ce4535d854f1","Type":"ContainerStarted","Data":"1592ddc9ada7089f0a97767680308696c33b12e7b32b0616f4ee01e0285b7838"} Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.514058 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-749bg" event={"ID":"73c0d1c7-cc84-4792-be06-ce4535d854f1","Type":"ContainerStarted","Data":"339f6defd8a590eb51556ae52114513b7964a9fb560b2dac1ea9ffc91a505f9c"} Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.518900 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9af43216-6482-4024-a320-fa8855680d03","Type":"ContainerStarted","Data":"e04f632c782ad650c5f21f4659e785c7460b983a1b0277db3ff9956d9ab7061b"} Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.555997 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-749bg" podStartSLOduration=2.555974745 podStartE2EDuration="2.555974745s" podCreationTimestamp="2026-01-27 18:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:38.531975497 +0000 UTC m=+1493.661258109" watchObservedRunningTime="2026-01-27 18:30:38.555974745 +0000 UTC m=+1493.685257347" Jan 27 18:30:38 crc kubenswrapper[4907]: W0127 18:30:38.652505 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1efa2c3_6982_45b0_830c_043caf2979ba.slice/crio-006028793e44c1a97367e636d50f23e1272f87c44b3f8d8441b2e73720f6592a WatchSource:0}: Error finding 
container 006028793e44c1a97367e636d50f23e1272f87c44b3f8d8441b2e73720f6592a: Status 404 returned error can't find the container with id 006028793e44c1a97367e636d50f23e1272f87c44b3f8d8441b2e73720f6592a Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.662630 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:30:38 crc kubenswrapper[4907]: I0127 18:30:38.687524 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:39 crc kubenswrapper[4907]: I0127 18:30:39.377609 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nr6n7"] Jan 27 18:30:39 crc kubenswrapper[4907]: I0127 18:30:39.556875 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1efa2c3-6982-45b0-830c-043caf2979ba","Type":"ContainerStarted","Data":"006028793e44c1a97367e636d50f23e1272f87c44b3f8d8441b2e73720f6592a"} Jan 27 18:30:39 crc kubenswrapper[4907]: I0127 18:30:39.564686 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nr6n7" event={"ID":"8f9b4dfd-c141-4a97-9656-3f48e7a04309","Type":"ContainerStarted","Data":"53cd4deec1055166dfe266c84a916bd926b595e3dcc201c9ff865ffeed80b231"} Jan 27 18:30:39 crc kubenswrapper[4907]: I0127 18:30:39.572698 4907 generic.go:334] "Generic (PLEG): container finished" podID="8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" containerID="c47423de91ef3cdc23957a64f0feb2303eae5d5532344bab60096094e88a4b1a" exitCode=0 Jan 27 18:30:39 crc kubenswrapper[4907]: I0127 18:30:39.572770 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-nft4l" event={"ID":"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6","Type":"ContainerDied","Data":"c47423de91ef3cdc23957a64f0feb2303eae5d5532344bab60096094e88a4b1a"} Jan 27 18:30:39 crc kubenswrapper[4907]: I0127 18:30:39.609597 4907 
generic.go:334] "Generic (PLEG): container finished" podID="c16f7a68-05a6-494f-94ce-1774118b0592" containerID="7f3f482aaf8608c33753ad6013ec3d55dce11d1495376c8c771ab3fee9efdee3" exitCode=0 Jan 27 18:30:39 crc kubenswrapper[4907]: I0127 18:30:39.609668 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-lvm8r" event={"ID":"c16f7a68-05a6-494f-94ce-1774118b0592","Type":"ContainerDied","Data":"7f3f482aaf8608c33753ad6013ec3d55dce11d1495376c8c771ab3fee9efdee3"} Jan 27 18:30:39 crc kubenswrapper[4907]: I0127 18:30:39.875509 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:30:39 crc kubenswrapper[4907]: I0127 18:30:39.912267 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:30:40 crc kubenswrapper[4907]: I0127 18:30:40.638735 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nr6n7" event={"ID":"8f9b4dfd-c141-4a97-9656-3f48e7a04309","Type":"ContainerStarted","Data":"1f6118408a31d5a5e77efd770c6620f1c689b1f1d408e1f2ae98b9f2c6e384d3"} Jan 27 18:30:40 crc kubenswrapper[4907]: I0127 18:30:40.643410 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-nft4l" event={"ID":"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6","Type":"ContainerStarted","Data":"1a6a6f3405ecf1b0542db525557b379db577cc838c231c76b95fb8f82594f20e"} Jan 27 18:30:40 crc kubenswrapper[4907]: I0127 18:30:40.687984 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-nr6n7" podStartSLOduration=2.687959369 podStartE2EDuration="2.687959369s" podCreationTimestamp="2026-01-27 18:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:40.654906775 +0000 UTC m=+1495.784189387" watchObservedRunningTime="2026-01-27 18:30:40.687959369 +0000 UTC 
m=+1495.817241981" Jan 27 18:30:40 crc kubenswrapper[4907]: I0127 18:30:40.703933 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7877d89589-nft4l" podStartSLOduration=4.70391004 podStartE2EDuration="4.70391004s" podCreationTimestamp="2026-01-27 18:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:40.68549661 +0000 UTC m=+1495.814779222" watchObservedRunningTime="2026-01-27 18:30:40.70391004 +0000 UTC m=+1495.833192652" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.236977 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.354113 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-combined-ca-bundle\") pod \"c16f7a68-05a6-494f-94ce-1774118b0592\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.354474 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-scripts\") pod \"c16f7a68-05a6-494f-94ce-1774118b0592\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.354615 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-config-data\") pod \"c16f7a68-05a6-494f-94ce-1774118b0592\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.354636 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7928g\" (UniqueName: 
\"kubernetes.io/projected/c16f7a68-05a6-494f-94ce-1774118b0592-kube-api-access-7928g\") pod \"c16f7a68-05a6-494f-94ce-1774118b0592\" (UID: \"c16f7a68-05a6-494f-94ce-1774118b0592\") " Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.367777 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c16f7a68-05a6-494f-94ce-1774118b0592-kube-api-access-7928g" (OuterVolumeSpecName: "kube-api-access-7928g") pod "c16f7a68-05a6-494f-94ce-1774118b0592" (UID: "c16f7a68-05a6-494f-94ce-1774118b0592"). InnerVolumeSpecName "kube-api-access-7928g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.377348 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-scripts" (OuterVolumeSpecName: "scripts") pod "c16f7a68-05a6-494f-94ce-1774118b0592" (UID: "c16f7a68-05a6-494f-94ce-1774118b0592"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.403021 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c16f7a68-05a6-494f-94ce-1774118b0592" (UID: "c16f7a68-05a6-494f-94ce-1774118b0592"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.403481 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-config-data" (OuterVolumeSpecName: "config-data") pod "c16f7a68-05a6-494f-94ce-1774118b0592" (UID: "c16f7a68-05a6-494f-94ce-1774118b0592"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.458423 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.458451 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.458461 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7928g\" (UniqueName: \"kubernetes.io/projected/c16f7a68-05a6-494f-94ce-1774118b0592-kube-api-access-7928g\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.458473 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16f7a68-05a6-494f-94ce-1774118b0592-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.680753 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-lvm8r" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.683613 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-lvm8r" event={"ID":"c16f7a68-05a6-494f-94ce-1774118b0592","Type":"ContainerDied","Data":"75948b867e63aed3cefc02d5b081a1666ba2b4c5ee153c818cd47b52df0895f4"} Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.684529 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75948b867e63aed3cefc02d5b081a1666ba2b4c5ee153c818cd47b52df0895f4" Jan 27 18:30:41 crc kubenswrapper[4907]: I0127 18:30:41.684570 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.708458 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4269e70c-a481-47cf-a9fe-7d9095cb4445","Type":"ContainerStarted","Data":"d7ba434e8d1b1559b676e4eb4519142b3b9ee545a9bab27e4946d900833703e2"} Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.710191 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4269e70c-a481-47cf-a9fe-7d9095cb4445","Type":"ContainerStarted","Data":"19dad0dbd6273dd7fec057ebd67706659b21c941b57960ba57bb7ab246fd7cb4"} Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.708900 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4269e70c-a481-47cf-a9fe-7d9095cb4445" containerName="nova-metadata-metadata" containerID="cri-o://d7ba434e8d1b1559b676e4eb4519142b3b9ee545a9bab27e4946d900833703e2" gracePeriod=30 Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.708588 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4269e70c-a481-47cf-a9fe-7d9095cb4445" containerName="nova-metadata-log" 
containerID="cri-o://19dad0dbd6273dd7fec057ebd67706659b21c941b57960ba57bb7ab246fd7cb4" gracePeriod=30 Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.711254 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9af43216-6482-4024-a320-fa8855680d03","Type":"ContainerStarted","Data":"953bd6a21f22e80790f12697c3007910185a94c9be2431db4b70b13529bc24cb"} Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.711334 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9af43216-6482-4024-a320-fa8855680d03","Type":"ContainerStarted","Data":"fa5fa97efb9685dd1c7db2a09e4acea16725221f474d904729f0bd8843fe23c1"} Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.715256 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"45bfe136-4245-4d16-9c68-2a21136b3b9a","Type":"ContainerStarted","Data":"a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646"} Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.726697 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1efa2c3-6982-45b0-830c-043caf2979ba","Type":"ContainerStarted","Data":"88276e87d3b070cf8843fa34d81f32ef9093bc5ca757768f4520044bd9bd9abd"} Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.726860 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="d1efa2c3-6982-45b0-830c-043caf2979ba" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://88276e87d3b070cf8843fa34d81f32ef9093bc5ca757768f4520044bd9bd9abd" gracePeriod=30 Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.753602 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.009641476 podStartE2EDuration="7.753576338s" podCreationTimestamp="2026-01-27 18:30:36 +0000 UTC" 
firstStartedPulling="2026-01-27 18:30:37.992233123 +0000 UTC m=+1493.121515735" lastFinishedPulling="2026-01-27 18:30:42.736167985 +0000 UTC m=+1497.865450597" observedRunningTime="2026-01-27 18:30:43.733210993 +0000 UTC m=+1498.862493605" watchObservedRunningTime="2026-01-27 18:30:43.753576338 +0000 UTC m=+1498.882858950" Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.791333 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.749368703 podStartE2EDuration="7.791312095s" podCreationTimestamp="2026-01-27 18:30:36 +0000 UTC" firstStartedPulling="2026-01-27 18:30:38.669439142 +0000 UTC m=+1493.798721754" lastFinishedPulling="2026-01-27 18:30:42.711382534 +0000 UTC m=+1497.840665146" observedRunningTime="2026-01-27 18:30:43.772692699 +0000 UTC m=+1498.901975401" watchObservedRunningTime="2026-01-27 18:30:43.791312095 +0000 UTC m=+1498.920594697" Jan 27 18:30:43 crc kubenswrapper[4907]: I0127 18:30:43.812760 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.098280052 podStartE2EDuration="7.812743411s" podCreationTimestamp="2026-01-27 18:30:36 +0000 UTC" firstStartedPulling="2026-01-27 18:30:37.996712489 +0000 UTC m=+1493.125995101" lastFinishedPulling="2026-01-27 18:30:42.711175848 +0000 UTC m=+1497.840458460" observedRunningTime="2026-01-27 18:30:43.803487989 +0000 UTC m=+1498.932770601" watchObservedRunningTime="2026-01-27 18:30:43.812743411 +0000 UTC m=+1498.942026023" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.650748 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.222068772 podStartE2EDuration="8.650730343s" podCreationTimestamp="2026-01-27 18:30:36 +0000 UTC" firstStartedPulling="2026-01-27 18:30:38.282491236 +0000 UTC m=+1493.411773848" lastFinishedPulling="2026-01-27 18:30:42.711152807 +0000 UTC 
m=+1497.840435419" observedRunningTime="2026-01-27 18:30:43.834153346 +0000 UTC m=+1498.963435958" watchObservedRunningTime="2026-01-27 18:30:44.650730343 +0000 UTC m=+1499.780012955" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.652956 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 27 18:30:44 crc kubenswrapper[4907]: E0127 18:30:44.653477 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16f7a68-05a6-494f-94ce-1774118b0592" containerName="aodh-db-sync" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.653496 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16f7a68-05a6-494f-94ce-1774118b0592" containerName="aodh-db-sync" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.653726 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16f7a68-05a6-494f-94ce-1774118b0592" containerName="aodh-db-sync" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.655969 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.664080 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-xd6ml" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.664204 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.664082 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.674797 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.740408 4907 generic.go:334] "Generic (PLEG): container finished" podID="4269e70c-a481-47cf-a9fe-7d9095cb4445" containerID="d7ba434e8d1b1559b676e4eb4519142b3b9ee545a9bab27e4946d900833703e2" exitCode=0 Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.740436 4907 generic.go:334] "Generic (PLEG): container finished" podID="4269e70c-a481-47cf-a9fe-7d9095cb4445" containerID="19dad0dbd6273dd7fec057ebd67706659b21c941b57960ba57bb7ab246fd7cb4" exitCode=143 Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.740523 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4269e70c-a481-47cf-a9fe-7d9095cb4445","Type":"ContainerDied","Data":"d7ba434e8d1b1559b676e4eb4519142b3b9ee545a9bab27e4946d900833703e2"} Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.740581 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4269e70c-a481-47cf-a9fe-7d9095cb4445","Type":"ContainerDied","Data":"19dad0dbd6273dd7fec057ebd67706659b21c941b57960ba57bb7ab246fd7cb4"} Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.757892 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-scripts\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.758065 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-combined-ca-bundle\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.758201 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-config-data\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.758305 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whppk\" (UniqueName: \"kubernetes.io/projected/eec3a07c-7c6e-40ed-9a0a-f1952f923616-kube-api-access-whppk\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.860881 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whppk\" (UniqueName: \"kubernetes.io/projected/eec3a07c-7c6e-40ed-9a0a-f1952f923616-kube-api-access-whppk\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.861067 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-scripts\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0" Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 
18:30:44.861195 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-combined-ca-bundle\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0"
Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.861248 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-config-data\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0"
Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.870505 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-scripts\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0"
Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.871377 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-config-data\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0"
Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.887526 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-combined-ca-bundle\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0"
Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.890237 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whppk\" (UniqueName: \"kubernetes.io/projected/eec3a07c-7c6e-40ed-9a0a-f1952f923616-kube-api-access-whppk\") pod \"aodh-0\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " pod="openstack/aodh-0"
Jan 27 18:30:44 crc kubenswrapper[4907]: I0127 18:30:44.979320 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.588059 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.680638 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-config-data\") pod \"4269e70c-a481-47cf-a9fe-7d9095cb4445\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") "
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.680694 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt2z6\" (UniqueName: \"kubernetes.io/projected/4269e70c-a481-47cf-a9fe-7d9095cb4445-kube-api-access-vt2z6\") pod \"4269e70c-a481-47cf-a9fe-7d9095cb4445\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") "
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.680885 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4269e70c-a481-47cf-a9fe-7d9095cb4445-logs\") pod \"4269e70c-a481-47cf-a9fe-7d9095cb4445\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") "
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.680938 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-combined-ca-bundle\") pod \"4269e70c-a481-47cf-a9fe-7d9095cb4445\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") "
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.681971 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4269e70c-a481-47cf-a9fe-7d9095cb4445-logs" (OuterVolumeSpecName: "logs") pod "4269e70c-a481-47cf-a9fe-7d9095cb4445" (UID: "4269e70c-a481-47cf-a9fe-7d9095cb4445"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.687371 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4269e70c-a481-47cf-a9fe-7d9095cb4445-kube-api-access-vt2z6" (OuterVolumeSpecName: "kube-api-access-vt2z6") pod "4269e70c-a481-47cf-a9fe-7d9095cb4445" (UID: "4269e70c-a481-47cf-a9fe-7d9095cb4445"). InnerVolumeSpecName "kube-api-access-vt2z6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:30:45 crc kubenswrapper[4907]: E0127 18:30:45.722073 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-config-data podName:4269e70c-a481-47cf-a9fe-7d9095cb4445 nodeName:}" failed. No retries permitted until 2026-01-27 18:30:46.22204387 +0000 UTC m=+1501.351326492 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-config-data") pod "4269e70c-a481-47cf-a9fe-7d9095cb4445" (UID: "4269e70c-a481-47cf-a9fe-7d9095cb4445") : error deleting /var/lib/kubelet/pods/4269e70c-a481-47cf-a9fe-7d9095cb4445/volume-subpaths: remove /var/lib/kubelet/pods/4269e70c-a481-47cf-a9fe-7d9095cb4445/volume-subpaths: no such file or directory
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.727754 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4269e70c-a481-47cf-a9fe-7d9095cb4445" (UID: "4269e70c-a481-47cf-a9fe-7d9095cb4445"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.774802 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.783621 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4269e70c-a481-47cf-a9fe-7d9095cb4445-logs\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.783674 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.783684 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt2z6\" (UniqueName: \"kubernetes.io/projected/4269e70c-a481-47cf-a9fe-7d9095cb4445-kube-api-access-vt2z6\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:45 crc kubenswrapper[4907]: W0127 18:30:45.785833 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-2d41b6a1c04472fbb2c5f35afd45971f94d6d5157fb4cddf70ad7a0f3291f763 WatchSource:0}: Error finding container 2d41b6a1c04472fbb2c5f35afd45971f94d6d5157fb4cddf70ad7a0f3291f763: Status 404 returned error can't find the container with id 2d41b6a1c04472fbb2c5f35afd45971f94d6d5157fb4cddf70ad7a0f3291f763
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.796665 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4269e70c-a481-47cf-a9fe-7d9095cb4445","Type":"ContainerDied","Data":"88ea244257e06e56a19501ea1e6124f02fd538a2f4b98bed18392bb73c6da0d7"}
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.796707 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.796733 4907 scope.go:117] "RemoveContainer" containerID="d7ba434e8d1b1559b676e4eb4519142b3b9ee545a9bab27e4946d900833703e2"
Jan 27 18:30:45 crc kubenswrapper[4907]: I0127 18:30:45.852931 4907 scope.go:117] "RemoveContainer" containerID="19dad0dbd6273dd7fec057ebd67706659b21c941b57960ba57bb7ab246fd7cb4"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.295567 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-config-data\") pod \"4269e70c-a481-47cf-a9fe-7d9095cb4445\" (UID: \"4269e70c-a481-47cf-a9fe-7d9095cb4445\") "
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.308615 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-config-data" (OuterVolumeSpecName: "config-data") pod "4269e70c-a481-47cf-a9fe-7d9095cb4445" (UID: "4269e70c-a481-47cf-a9fe-7d9095cb4445"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.399090 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4269e70c-a481-47cf-a9fe-7d9095cb4445-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.601343 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.654921 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.676330 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 18:30:46 crc kubenswrapper[4907]: E0127 18:30:46.678657 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4269e70c-a481-47cf-a9fe-7d9095cb4445" containerName="nova-metadata-log"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.678683 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4269e70c-a481-47cf-a9fe-7d9095cb4445" containerName="nova-metadata-log"
Jan 27 18:30:46 crc kubenswrapper[4907]: E0127 18:30:46.678744 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4269e70c-a481-47cf-a9fe-7d9095cb4445" containerName="nova-metadata-metadata"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.678754 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4269e70c-a481-47cf-a9fe-7d9095cb4445" containerName="nova-metadata-metadata"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.679253 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="4269e70c-a481-47cf-a9fe-7d9095cb4445" containerName="nova-metadata-metadata"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.679300 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="4269e70c-a481-47cf-a9fe-7d9095cb4445" containerName="nova-metadata-log"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.686902 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.692145 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.692238 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.698480 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.708895 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.709010 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-config-data\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.710466 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.710543 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1037249a-76d6-42a0-8336-dc2d8b998362-logs\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.710603 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxw4b\" (UniqueName: \"kubernetes.io/projected/1037249a-76d6-42a0-8336-dc2d8b998362-kube-api-access-gxw4b\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.794297 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eec3a07c-7c6e-40ed-9a0a-f1952f923616","Type":"ContainerStarted","Data":"ea827ebb4a6c25ba9e21b4d37f520682eb5c11d4458964a25f261bd883feb99a"}
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.794347 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eec3a07c-7c6e-40ed-9a0a-f1952f923616","Type":"ContainerStarted","Data":"2d41b6a1c04472fbb2c5f35afd45971f94d6d5157fb4cddf70ad7a0f3291f763"}
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.816395 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.816508 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-config-data\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.816772 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.816805 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1037249a-76d6-42a0-8336-dc2d8b998362-logs\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.816828 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxw4b\" (UniqueName: \"kubernetes.io/projected/1037249a-76d6-42a0-8336-dc2d8b998362-kube-api-access-gxw4b\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.818567 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1037249a-76d6-42a0-8336-dc2d8b998362-logs\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.824188 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.825106 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.837190 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-config-data\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:46 crc kubenswrapper[4907]: I0127 18:30:46.866611 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxw4b\" (UniqueName: \"kubernetes.io/projected/1037249a-76d6-42a0-8336-dc2d8b998362-kube-api-access-gxw4b\") pod \"nova-metadata-0\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " pod="openstack/nova-metadata-0"
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.017902 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.067618 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.068951 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.109301 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.127683 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7877d89589-nft4l"
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.165773 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.166147 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.226163 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-dwq2p"]
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.226442 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" podUID="719784a4-cead-4054-ac6b-e7e45118be8c" containerName="dnsmasq-dns" containerID="cri-o://f62185a38c302c1e2c4f55c6ff8d8375c06e90c4a030436e37794aaca439103b" gracePeriod=10
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.420333 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.645421 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.772235 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4269e70c-a481-47cf-a9fe-7d9095cb4445" path="/var/lib/kubelet/pods/4269e70c-a481-47cf-a9fe-7d9095cb4445/volumes"
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.821078 4907 generic.go:334] "Generic (PLEG): container finished" podID="719784a4-cead-4054-ac6b-e7e45118be8c" containerID="f62185a38c302c1e2c4f55c6ff8d8375c06e90c4a030436e37794aaca439103b" exitCode=0
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.821132 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" event={"ID":"719784a4-cead-4054-ac6b-e7e45118be8c","Type":"ContainerDied","Data":"f62185a38c302c1e2c4f55c6ff8d8375c06e90c4a030436e37794aaca439103b"}
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.826602 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1037249a-76d6-42a0-8336-dc2d8b998362","Type":"ContainerStarted","Data":"1e16f7131fae9377e2aba255ec1b7af15c6d7f5b01871af06aa6dfe6df2514ff"}
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.834033 4907 generic.go:334] "Generic (PLEG): container finished" podID="73c0d1c7-cc84-4792-be06-ce4535d854f1" containerID="1592ddc9ada7089f0a97767680308696c33b12e7b32b0616f4ee01e0285b7838" exitCode=0
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.835207 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-749bg" event={"ID":"73c0d1c7-cc84-4792-be06-ce4535d854f1","Type":"ContainerDied","Data":"1592ddc9ada7089f0a97767680308696c33b12e7b32b0616f4ee01e0285b7838"}
Jan 27 18:30:47 crc kubenswrapper[4907]: I0127 18:30:47.897765 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.215136 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p"
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.247852 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9af43216-6482-4024-a320-fa8855680d03" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.247:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.248196 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9af43216-6482-4024-a320-fa8855680d03" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.247:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.354293 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-nb\") pod \"719784a4-cead-4054-ac6b-e7e45118be8c\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") "
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.354400 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-config\") pod \"719784a4-cead-4054-ac6b-e7e45118be8c\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") "
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.354467 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-swift-storage-0\") pod \"719784a4-cead-4054-ac6b-e7e45118be8c\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") "
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.354686 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds4bd\" (UniqueName: \"kubernetes.io/projected/719784a4-cead-4054-ac6b-e7e45118be8c-kube-api-access-ds4bd\") pod \"719784a4-cead-4054-ac6b-e7e45118be8c\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") "
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.354743 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-svc\") pod \"719784a4-cead-4054-ac6b-e7e45118be8c\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") "
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.354797 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-sb\") pod \"719784a4-cead-4054-ac6b-e7e45118be8c\" (UID: \"719784a4-cead-4054-ac6b-e7e45118be8c\") "
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.365812 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/719784a4-cead-4054-ac6b-e7e45118be8c-kube-api-access-ds4bd" (OuterVolumeSpecName: "kube-api-access-ds4bd") pod "719784a4-cead-4054-ac6b-e7e45118be8c" (UID: "719784a4-cead-4054-ac6b-e7e45118be8c"). InnerVolumeSpecName "kube-api-access-ds4bd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.445355 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "719784a4-cead-4054-ac6b-e7e45118be8c" (UID: "719784a4-cead-4054-ac6b-e7e45118be8c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.446058 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-config" (OuterVolumeSpecName: "config") pod "719784a4-cead-4054-ac6b-e7e45118be8c" (UID: "719784a4-cead-4054-ac6b-e7e45118be8c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.457506 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds4bd\" (UniqueName: \"kubernetes.io/projected/719784a4-cead-4054-ac6b-e7e45118be8c-kube-api-access-ds4bd\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.457538 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.457548 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.495173 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.511098 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "719784a4-cead-4054-ac6b-e7e45118be8c" (UID: "719784a4-cead-4054-ac6b-e7e45118be8c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.524940 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "719784a4-cead-4054-ac6b-e7e45118be8c" (UID: "719784a4-cead-4054-ac6b-e7e45118be8c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.530887 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "719784a4-cead-4054-ac6b-e7e45118be8c" (UID: "719784a4-cead-4054-ac6b-e7e45118be8c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.559474 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.559503 4907 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.559513 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/719784a4-cead-4054-ac6b-e7e45118be8c-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.643622 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.644113 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="ceilometer-central-agent" containerID="cri-o://b0ecf89b35415280ed28a78077dbec5c9a90cfd8e1f7a3067489af30dd9433f7" gracePeriod=30
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.644320 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="proxy-httpd" containerID="cri-o://160bfbd8ff3d696b07a91546ee269e538de92123a1f5d507d072d96996a51021" gracePeriod=30
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.644387 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="sg-core" containerID="cri-o://15de9a0dedeadeed2c3936432cdefafa1e6e44b42875901f6ff2f3beb62f8528" gracePeriod=30
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.645395 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="ceilometer-notification-agent" containerID="cri-o://523048f659a18237b97cb416947fddec726a9e99732d2c72741880719562907f" gracePeriod=30
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.654223 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.240:3000/\": EOF"
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.870359 4907 generic.go:334] "Generic (PLEG): container finished" podID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerID="160bfbd8ff3d696b07a91546ee269e538de92123a1f5d507d072d96996a51021" exitCode=0
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.870390 4907 generic.go:334] "Generic (PLEG): container finished" podID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerID="15de9a0dedeadeed2c3936432cdefafa1e6e44b42875901f6ff2f3beb62f8528" exitCode=2
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.870465 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43c0bea1-2042-4d24-81b3-bc7c93696fcb","Type":"ContainerDied","Data":"160bfbd8ff3d696b07a91546ee269e538de92123a1f5d507d072d96996a51021"}
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.870492 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43c0bea1-2042-4d24-81b3-bc7c93696fcb","Type":"ContainerDied","Data":"15de9a0dedeadeed2c3936432cdefafa1e6e44b42875901f6ff2f3beb62f8528"}
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.882824 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p"
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.882864 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" event={"ID":"719784a4-cead-4054-ac6b-e7e45118be8c","Type":"ContainerDied","Data":"fb2fc41aa6c79868126426826ea77ab0aae08150f293d25b0312a5646e2300eb"}
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.882915 4907 scope.go:117] "RemoveContainer" containerID="f62185a38c302c1e2c4f55c6ff8d8375c06e90c4a030436e37794aaca439103b"
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.913438 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1037249a-76d6-42a0-8336-dc2d8b998362","Type":"ContainerStarted","Data":"439c0e228d54b650aa2d229cddd5634e727c8a516bf39e72c77e909e264787be"}
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.913490 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1037249a-76d6-42a0-8336-dc2d8b998362","Type":"ContainerStarted","Data":"59803aa5ed2bce30a33ad11ee77adc43ad17ca6fa1fac9ff279ab08a8ad25f5d"}
Jan 27 18:30:48 crc kubenswrapper[4907]: I0127 18:30:48.938907 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.938890764 podStartE2EDuration="2.938890764s" podCreationTimestamp="2026-01-27 18:30:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:48.934619383 +0000 UTC m=+1504.063901995" watchObservedRunningTime="2026-01-27 18:30:48.938890764 +0000 UTC m=+1504.068173376"
Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.049729 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-dwq2p"]
Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.075337 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-dwq2p"]
Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.641618 4907 scope.go:117] "RemoveContainer" containerID="838120c8a589e1eec6c4e9a5c93d0700a4e4ff1ee248a15cba9fab8e23320155"
Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.788437 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="719784a4-cead-4054-ac6b-e7e45118be8c" path="/var/lib/kubelet/pods/719784a4-cead-4054-ac6b-e7e45118be8c/volumes"
Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.857279 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-749bg"
Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.913641 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf2fl\" (UniqueName: \"kubernetes.io/projected/73c0d1c7-cc84-4792-be06-ce4535d854f1-kube-api-access-lf2fl\") pod \"73c0d1c7-cc84-4792-be06-ce4535d854f1\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") "
Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.913991 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-scripts\") pod \"73c0d1c7-cc84-4792-be06-ce4535d854f1\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") "
Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.914021 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-combined-ca-bundle\") pod \"73c0d1c7-cc84-4792-be06-ce4535d854f1\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") "
Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.914089 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-config-data\") pod
\"73c0d1c7-cc84-4792-be06-ce4535d854f1\" (UID: \"73c0d1c7-cc84-4792-be06-ce4535d854f1\") " Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.922439 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73c0d1c7-cc84-4792-be06-ce4535d854f1-kube-api-access-lf2fl" (OuterVolumeSpecName: "kube-api-access-lf2fl") pod "73c0d1c7-cc84-4792-be06-ce4535d854f1" (UID: "73c0d1c7-cc84-4792-be06-ce4535d854f1"). InnerVolumeSpecName "kube-api-access-lf2fl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.929510 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-scripts" (OuterVolumeSpecName: "scripts") pod "73c0d1c7-cc84-4792-be06-ce4535d854f1" (UID: "73c0d1c7-cc84-4792-be06-ce4535d854f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.939517 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-749bg" event={"ID":"73c0d1c7-cc84-4792-be06-ce4535d854f1","Type":"ContainerDied","Data":"339f6defd8a590eb51556ae52114513b7964a9fb560b2dac1ea9ffc91a505f9c"} Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.939572 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="339f6defd8a590eb51556ae52114513b7964a9fb560b2dac1ea9ffc91a505f9c" Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.939622 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-749bg" Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.947793 4907 generic.go:334] "Generic (PLEG): container finished" podID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerID="b0ecf89b35415280ed28a78077dbec5c9a90cfd8e1f7a3067489af30dd9433f7" exitCode=0 Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.948874 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43c0bea1-2042-4d24-81b3-bc7c93696fcb","Type":"ContainerDied","Data":"b0ecf89b35415280ed28a78077dbec5c9a90cfd8e1f7a3067489af30dd9433f7"} Jan 27 18:30:49 crc kubenswrapper[4907]: I0127 18:30:49.976806 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73c0d1c7-cc84-4792-be06-ce4535d854f1" (UID: "73c0d1c7-cc84-4792-be06-ce4535d854f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:50 crc kubenswrapper[4907]: I0127 18:30:50.005757 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-config-data" (OuterVolumeSpecName: "config-data") pod "73c0d1c7-cc84-4792-be06-ce4535d854f1" (UID: "73c0d1c7-cc84-4792-be06-ce4535d854f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:50 crc kubenswrapper[4907]: I0127 18:30:50.017395 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:50 crc kubenswrapper[4907]: I0127 18:30:50.017439 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:50 crc kubenswrapper[4907]: I0127 18:30:50.017453 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c0d1c7-cc84-4792-be06-ce4535d854f1-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:50 crc kubenswrapper[4907]: I0127 18:30:50.017468 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf2fl\" (UniqueName: \"kubernetes.io/projected/73c0d1c7-cc84-4792-be06-ce4535d854f1-kube-api-access-lf2fl\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:50 crc kubenswrapper[4907]: I0127 18:30:50.964583 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eec3a07c-7c6e-40ed-9a0a-f1952f923616","Type":"ContainerStarted","Data":"046248ed8d9a9abf6adc66c7f8fc1ac0fb750db71be11fc6d07cf5ab366d9f13"} Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.049779 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.050092 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9af43216-6482-4024-a320-fa8855680d03" containerName="nova-api-log" containerID="cri-o://fa5fa97efb9685dd1c7db2a09e4acea16725221f474d904729f0bd8843fe23c1" gracePeriod=30 Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.050187 4907 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/nova-api-0" podUID="9af43216-6482-4024-a320-fa8855680d03" containerName="nova-api-api" containerID="cri-o://953bd6a21f22e80790f12697c3007910185a94c9be2431db4b70b13529bc24cb" gracePeriod=30 Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.062110 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.062295 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="45bfe136-4245-4d16-9c68-2a21136b3b9a" containerName="nova-scheduler-scheduler" containerID="cri-o://a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646" gracePeriod=30 Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.094023 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.094320 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1037249a-76d6-42a0-8336-dc2d8b998362" containerName="nova-metadata-log" containerID="cri-o://59803aa5ed2bce30a33ad11ee77adc43ad17ca6fa1fac9ff279ab08a8ad25f5d" gracePeriod=30 Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.094520 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1037249a-76d6-42a0-8336-dc2d8b998362" containerName="nova-metadata-metadata" containerID="cri-o://439c0e228d54b650aa2d229cddd5634e727c8a516bf39e72c77e909e264787be" gracePeriod=30 Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.981171 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eec3a07c-7c6e-40ed-9a0a-f1952f923616","Type":"ContainerStarted","Data":"5e95cc0b0b4316adc78436ac6c627d99d2aaa26db075120089d62720b076de22"} Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.987951 4907 generic.go:334] 
"Generic (PLEG): container finished" podID="9af43216-6482-4024-a320-fa8855680d03" containerID="fa5fa97efb9685dd1c7db2a09e4acea16725221f474d904729f0bd8843fe23c1" exitCode=143 Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.988027 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9af43216-6482-4024-a320-fa8855680d03","Type":"ContainerDied","Data":"fa5fa97efb9685dd1c7db2a09e4acea16725221f474d904729f0bd8843fe23c1"} Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.993147 4907 generic.go:334] "Generic (PLEG): container finished" podID="1037249a-76d6-42a0-8336-dc2d8b998362" containerID="439c0e228d54b650aa2d229cddd5634e727c8a516bf39e72c77e909e264787be" exitCode=0 Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.993184 4907 generic.go:334] "Generic (PLEG): container finished" podID="1037249a-76d6-42a0-8336-dc2d8b998362" containerID="59803aa5ed2bce30a33ad11ee77adc43ad17ca6fa1fac9ff279ab08a8ad25f5d" exitCode=143 Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.993207 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1037249a-76d6-42a0-8336-dc2d8b998362","Type":"ContainerDied","Data":"439c0e228d54b650aa2d229cddd5634e727c8a516bf39e72c77e909e264787be"} Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.993237 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1037249a-76d6-42a0-8336-dc2d8b998362","Type":"ContainerDied","Data":"59803aa5ed2bce30a33ad11ee77adc43ad17ca6fa1fac9ff279ab08a8ad25f5d"} Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.993249 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1037249a-76d6-42a0-8336-dc2d8b998362","Type":"ContainerDied","Data":"1e16f7131fae9377e2aba255ec1b7af15c6d7f5b01871af06aa6dfe6df2514ff"} Jan 27 18:30:51 crc kubenswrapper[4907]: I0127 18:30:51.993259 4907 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="1e16f7131fae9377e2aba255ec1b7af15c6d7f5b01871af06aa6dfe6df2514ff" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.018881 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.018950 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.024626 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.063135 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-combined-ca-bundle\") pod \"1037249a-76d6-42a0-8336-dc2d8b998362\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.063229 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-config-data\") pod \"1037249a-76d6-42a0-8336-dc2d8b998362\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.063331 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxw4b\" (UniqueName: \"kubernetes.io/projected/1037249a-76d6-42a0-8336-dc2d8b998362-kube-api-access-gxw4b\") pod \"1037249a-76d6-42a0-8336-dc2d8b998362\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.063493 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1037249a-76d6-42a0-8336-dc2d8b998362-logs\") pod \"1037249a-76d6-42a0-8336-dc2d8b998362\" (UID: 
\"1037249a-76d6-42a0-8336-dc2d8b998362\") " Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.063596 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-nova-metadata-tls-certs\") pod \"1037249a-76d6-42a0-8336-dc2d8b998362\" (UID: \"1037249a-76d6-42a0-8336-dc2d8b998362\") " Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.063817 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1037249a-76d6-42a0-8336-dc2d8b998362-logs" (OuterVolumeSpecName: "logs") pod "1037249a-76d6-42a0-8336-dc2d8b998362" (UID: "1037249a-76d6-42a0-8336-dc2d8b998362"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.064299 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1037249a-76d6-42a0-8336-dc2d8b998362-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.070863 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1037249a-76d6-42a0-8336-dc2d8b998362-kube-api-access-gxw4b" (OuterVolumeSpecName: "kube-api-access-gxw4b") pod "1037249a-76d6-42a0-8336-dc2d8b998362" (UID: "1037249a-76d6-42a0-8336-dc2d8b998362"). InnerVolumeSpecName "kube-api-access-gxw4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:52 crc kubenswrapper[4907]: E0127 18:30:52.071146 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 18:30:52 crc kubenswrapper[4907]: E0127 18:30:52.072633 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 18:30:52 crc kubenswrapper[4907]: E0127 18:30:52.074464 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 18:30:52 crc kubenswrapper[4907]: E0127 18:30:52.074509 4907 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="45bfe136-4245-4d16-9c68-2a21136b3b9a" containerName="nova-scheduler-scheduler" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.106371 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-config-data" (OuterVolumeSpecName: "config-data") pod "1037249a-76d6-42a0-8336-dc2d8b998362" (UID: "1037249a-76d6-42a0-8336-dc2d8b998362"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.109731 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1037249a-76d6-42a0-8336-dc2d8b998362" (UID: "1037249a-76d6-42a0-8336-dc2d8b998362"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.135446 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1037249a-76d6-42a0-8336-dc2d8b998362" (UID: "1037249a-76d6-42a0-8336-dc2d8b998362"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.166521 4907 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.166590 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.166603 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1037249a-76d6-42a0-8336-dc2d8b998362-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.166615 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxw4b\" (UniqueName: 
\"kubernetes.io/projected/1037249a-76d6-42a0-8336-dc2d8b998362-kube-api-access-gxw4b\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:52 crc kubenswrapper[4907]: I0127 18:30:52.675129 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7d978555f9-dwq2p" podUID="719784a4-cead-4054-ac6b-e7e45118be8c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.215:5353: i/o timeout" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.003924 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.512877 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.532609 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.544158 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:30:53 crc kubenswrapper[4907]: E0127 18:30:53.544890 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73c0d1c7-cc84-4792-be06-ce4535d854f1" containerName="nova-manage" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.544919 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c0d1c7-cc84-4792-be06-ce4535d854f1" containerName="nova-manage" Jan 27 18:30:53 crc kubenswrapper[4907]: E0127 18:30:53.544948 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="719784a4-cead-4054-ac6b-e7e45118be8c" containerName="init" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.544961 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="719784a4-cead-4054-ac6b-e7e45118be8c" containerName="init" Jan 27 18:30:53 crc kubenswrapper[4907]: E0127 18:30:53.544981 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1037249a-76d6-42a0-8336-dc2d8b998362" 
containerName="nova-metadata-log" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.544989 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1037249a-76d6-42a0-8336-dc2d8b998362" containerName="nova-metadata-log" Jan 27 18:30:53 crc kubenswrapper[4907]: E0127 18:30:53.545012 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="719784a4-cead-4054-ac6b-e7e45118be8c" containerName="dnsmasq-dns" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.545020 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="719784a4-cead-4054-ac6b-e7e45118be8c" containerName="dnsmasq-dns" Jan 27 18:30:53 crc kubenswrapper[4907]: E0127 18:30:53.545043 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1037249a-76d6-42a0-8336-dc2d8b998362" containerName="nova-metadata-metadata" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.545053 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1037249a-76d6-42a0-8336-dc2d8b998362" containerName="nova-metadata-metadata" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.545390 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="73c0d1c7-cc84-4792-be06-ce4535d854f1" containerName="nova-manage" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.545422 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="1037249a-76d6-42a0-8336-dc2d8b998362" containerName="nova-metadata-metadata" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.545440 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="719784a4-cead-4054-ac6b-e7e45118be8c" containerName="dnsmasq-dns" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.545455 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="1037249a-76d6-42a0-8336-dc2d8b998362" containerName="nova-metadata-log" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.547229 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.550369 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.555658 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.557082 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.610945 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-959bm\" (UniqueName: \"kubernetes.io/projected/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-kube-api-access-959bm\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.612013 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.613647 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-config-data\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.613777 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-logs\") pod \"nova-metadata-0\" 
(UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.614108 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.716716 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.716849 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-959bm\" (UniqueName: \"kubernetes.io/projected/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-kube-api-access-959bm\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.716921 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.717098 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-config-data\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 
18:30:53.717130 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-logs\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.717661 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-logs\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.723483 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.724601 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.725274 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-config-data\") pod \"nova-metadata-0\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.738773 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-959bm\" (UniqueName: \"kubernetes.io/projected/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-kube-api-access-959bm\") pod \"nova-metadata-0\" (UID: 
\"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " pod="openstack/nova-metadata-0" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.788044 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1037249a-76d6-42a0-8336-dc2d8b998362" path="/var/lib/kubelet/pods/1037249a-76d6-42a0-8336-dc2d8b998362/volumes" Jan 27 18:30:53 crc kubenswrapper[4907]: I0127 18:30:53.875828 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:30:54 crc kubenswrapper[4907]: I0127 18:30:54.581371 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.031897 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5aaba60-3b03-4a67-8862-7def0fe6f9d9","Type":"ContainerStarted","Data":"d180305b0a777de2ed578a449a3d5ecb18240c9610de3f6b48a1a28bd4f905ad"} Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.034521 4907 generic.go:334] "Generic (PLEG): container finished" podID="8f9b4dfd-c141-4a97-9656-3f48e7a04309" containerID="1f6118408a31d5a5e77efd770c6620f1c689b1f1d408e1f2ae98b9f2c6e384d3" exitCode=0 Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.034590 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nr6n7" event={"ID":"8f9b4dfd-c141-4a97-9656-3f48e7a04309","Type":"ContainerDied","Data":"1f6118408a31d5a5e77efd770c6620f1c689b1f1d408e1f2ae98b9f2c6e384d3"} Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.039488 4907 generic.go:334] "Generic (PLEG): container finished" podID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerID="523048f659a18237b97cb416947fddec726a9e99732d2c72741880719562907f" exitCode=0 Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.039532 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"43c0bea1-2042-4d24-81b3-bc7c93696fcb","Type":"ContainerDied","Data":"523048f659a18237b97cb416947fddec726a9e99732d2c72741880719562907f"} Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.045759 4907 generic.go:334] "Generic (PLEG): container finished" podID="9af43216-6482-4024-a320-fa8855680d03" containerID="953bd6a21f22e80790f12697c3007910185a94c9be2431db4b70b13529bc24cb" exitCode=0 Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.045782 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9af43216-6482-4024-a320-fa8855680d03","Type":"ContainerDied","Data":"953bd6a21f22e80790f12697c3007910185a94c9be2431db4b70b13529bc24cb"} Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.254862 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.349760 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.363943 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-config-data\") pod \"9af43216-6482-4024-a320-fa8855680d03\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.364084 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af43216-6482-4024-a320-fa8855680d03-logs\") pod \"9af43216-6482-4024-a320-fa8855680d03\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.364116 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-combined-ca-bundle\") pod 
\"9af43216-6482-4024-a320-fa8855680d03\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.364158 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skg6v\" (UniqueName: \"kubernetes.io/projected/9af43216-6482-4024-a320-fa8855680d03-kube-api-access-skg6v\") pod \"9af43216-6482-4024-a320-fa8855680d03\" (UID: \"9af43216-6482-4024-a320-fa8855680d03\") " Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.364662 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9af43216-6482-4024-a320-fa8855680d03-logs" (OuterVolumeSpecName: "logs") pod "9af43216-6482-4024-a320-fa8855680d03" (UID: "9af43216-6482-4024-a320-fa8855680d03"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.365052 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af43216-6482-4024-a320-fa8855680d03-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.385834 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af43216-6482-4024-a320-fa8855680d03-kube-api-access-skg6v" (OuterVolumeSpecName: "kube-api-access-skg6v") pod "9af43216-6482-4024-a320-fa8855680d03" (UID: "9af43216-6482-4024-a320-fa8855680d03"). InnerVolumeSpecName "kube-api-access-skg6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.445115 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9af43216-6482-4024-a320-fa8855680d03" (UID: "9af43216-6482-4024-a320-fa8855680d03"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.456755 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-config-data" (OuterVolumeSpecName: "config-data") pod "9af43216-6482-4024-a320-fa8855680d03" (UID: "9af43216-6482-4024-a320-fa8855680d03"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.465821 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-sg-core-conf-yaml\") pod \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.466084 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-config-data\") pod \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.466183 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbpth\" (UniqueName: \"kubernetes.io/projected/43c0bea1-2042-4d24-81b3-bc7c93696fcb-kube-api-access-jbpth\") pod \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.466433 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-log-httpd\") pod \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.466618 4907 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-combined-ca-bundle\") pod \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.466728 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-run-httpd\") pod \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.466842 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-scripts\") pod \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\" (UID: \"43c0bea1-2042-4d24-81b3-bc7c93696fcb\") " Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.467474 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.467543 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skg6v\" (UniqueName: \"kubernetes.io/projected/9af43216-6482-4024-a320-fa8855680d03-kube-api-access-skg6v\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.467636 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af43216-6482-4024-a320-fa8855680d03-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.468264 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-run-httpd" (OuterVolumeSpecName: 
"run-httpd") pod "43c0bea1-2042-4d24-81b3-bc7c93696fcb" (UID: "43c0bea1-2042-4d24-81b3-bc7c93696fcb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.468790 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "43c0bea1-2042-4d24-81b3-bc7c93696fcb" (UID: "43c0bea1-2042-4d24-81b3-bc7c93696fcb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.470971 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43c0bea1-2042-4d24-81b3-bc7c93696fcb-kube-api-access-jbpth" (OuterVolumeSpecName: "kube-api-access-jbpth") pod "43c0bea1-2042-4d24-81b3-bc7c93696fcb" (UID: "43c0bea1-2042-4d24-81b3-bc7c93696fcb"). InnerVolumeSpecName "kube-api-access-jbpth". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.479843 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-scripts" (OuterVolumeSpecName: "scripts") pod "43c0bea1-2042-4d24-81b3-bc7c93696fcb" (UID: "43c0bea1-2042-4d24-81b3-bc7c93696fcb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.536768 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "43c0bea1-2042-4d24-81b3-bc7c93696fcb" (UID: "43c0bea1-2042-4d24-81b3-bc7c93696fcb"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.569585 4907 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.569627 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.569638 4907 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.569652 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbpth\" (UniqueName: \"kubernetes.io/projected/43c0bea1-2042-4d24-81b3-bc7c93696fcb-kube-api-access-jbpth\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.569662 4907 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43c0bea1-2042-4d24-81b3-bc7c93696fcb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.603175 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43c0bea1-2042-4d24-81b3-bc7c93696fcb" (UID: "43c0bea1-2042-4d24-81b3-bc7c93696fcb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.616551 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-config-data" (OuterVolumeSpecName: "config-data") pod "43c0bea1-2042-4d24-81b3-bc7c93696fcb" (UID: "43c0bea1-2042-4d24-81b3-bc7c93696fcb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.671803 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:55 crc kubenswrapper[4907]: I0127 18:30:55.672232 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c0bea1-2042-4d24-81b3-bc7c93696fcb-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.040649 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.076423 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43c0bea1-2042-4d24-81b3-bc7c93696fcb","Type":"ContainerDied","Data":"e1d0051373ff4add5140f7661da5a07a35b758893b2aaad5ceeef50c9d5f6b2b"} Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.076476 4907 scope.go:117] "RemoveContainer" containerID="160bfbd8ff3d696b07a91546ee269e538de92123a1f5d507d072d96996a51021" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.076761 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.089502 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9af43216-6482-4024-a320-fa8855680d03","Type":"ContainerDied","Data":"e04f632c782ad650c5f21f4659e785c7460b983a1b0277db3ff9956d9ab7061b"} Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.089648 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.093796 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-combined-ca-bundle\") pod \"45bfe136-4245-4d16-9c68-2a21136b3b9a\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.093878 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw6sl\" (UniqueName: \"kubernetes.io/projected/45bfe136-4245-4d16-9c68-2a21136b3b9a-kube-api-access-bw6sl\") pod \"45bfe136-4245-4d16-9c68-2a21136b3b9a\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.094056 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-config-data\") pod \"45bfe136-4245-4d16-9c68-2a21136b3b9a\" (UID: \"45bfe136-4245-4d16-9c68-2a21136b3b9a\") " Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.097329 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5aaba60-3b03-4a67-8862-7def0fe6f9d9","Type":"ContainerStarted","Data":"7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af"} Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.097779 4907 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5aaba60-3b03-4a67-8862-7def0fe6f9d9","Type":"ContainerStarted","Data":"047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177"} Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.102339 4907 generic.go:334] "Generic (PLEG): container finished" podID="45bfe136-4245-4d16-9c68-2a21136b3b9a" containerID="a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646" exitCode=0 Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.102417 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"45bfe136-4245-4d16-9c68-2a21136b3b9a","Type":"ContainerDied","Data":"a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646"} Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.102448 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"45bfe136-4245-4d16-9c68-2a21136b3b9a","Type":"ContainerDied","Data":"92c93f931d8f51099774c248291d876431d14b5c43aa83d56753a0ac2d31f02a"} Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.102506 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.104244 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45bfe136-4245-4d16-9c68-2a21136b3b9a-kube-api-access-bw6sl" (OuterVolumeSpecName: "kube-api-access-bw6sl") pod "45bfe136-4245-4d16-9c68-2a21136b3b9a" (UID: "45bfe136-4245-4d16-9c68-2a21136b3b9a"). InnerVolumeSpecName "kube-api-access-bw6sl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.113914 4907 scope.go:117] "RemoveContainer" containerID="15de9a0dedeadeed2c3936432cdefafa1e6e44b42875901f6ff2f3beb62f8528" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.123494 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-api" containerID="cri-o://ea827ebb4a6c25ba9e21b4d37f520682eb5c11d4458964a25f261bd883feb99a" gracePeriod=30 Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.123727 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-notifier" containerID="cri-o://5e95cc0b0b4316adc78436ac6c627d99d2aaa26db075120089d62720b076de22" gracePeriod=30 Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.123855 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-evaluator" containerID="cri-o://046248ed8d9a9abf6adc66c7f8fc1ac0fb750db71be11fc6d07cf5ab366d9f13" gracePeriod=30 Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.123645 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eec3a07c-7c6e-40ed-9a0a-f1952f923616","Type":"ContainerStarted","Data":"d6dbd00ea15b939bc9e1a2c74e901a7874a34a11481d5248ee8b219bc89b8ee5"} Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.138402 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.123655 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-listener" 
containerID="cri-o://d6dbd00ea15b939bc9e1a2c74e901a7874a34a11481d5248ee8b219bc89b8ee5" gracePeriod=30 Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.160654 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.173893 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45bfe136-4245-4d16-9c68-2a21136b3b9a" (UID: "45bfe136-4245-4d16-9c68-2a21136b3b9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.174060 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-config-data" (OuterVolumeSpecName: "config-data") pod "45bfe136-4245-4d16-9c68-2a21136b3b9a" (UID: "45bfe136-4245-4d16-9c68-2a21136b3b9a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.205942 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.205983 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bfe136-4245-4d16-9c68-2a21136b3b9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.205998 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw6sl\" (UniqueName: \"kubernetes.io/projected/45bfe136-4245-4d16-9c68-2a21136b3b9a-kube-api-access-bw6sl\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.221428 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.257631 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.278113 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: E0127 18:30:56.278959 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45bfe136-4245-4d16-9c68-2a21136b3b9a" containerName="nova-scheduler-scheduler" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.278982 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="45bfe136-4245-4d16-9c68-2a21136b3b9a" containerName="nova-scheduler-scheduler" Jan 27 18:30:56 crc kubenswrapper[4907]: E0127 18:30:56.279014 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="sg-core" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279047 4907 
state_mem.go:107] "Deleted CPUSet assignment" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="sg-core" Jan 27 18:30:56 crc kubenswrapper[4907]: E0127 18:30:56.279063 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af43216-6482-4024-a320-fa8855680d03" containerName="nova-api-log" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279074 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af43216-6482-4024-a320-fa8855680d03" containerName="nova-api-log" Jan 27 18:30:56 crc kubenswrapper[4907]: E0127 18:30:56.279089 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="proxy-httpd" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279099 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="proxy-httpd" Jan 27 18:30:56 crc kubenswrapper[4907]: E0127 18:30:56.279126 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="ceilometer-notification-agent" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279135 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="ceilometer-notification-agent" Jan 27 18:30:56 crc kubenswrapper[4907]: E0127 18:30:56.279147 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="ceilometer-central-agent" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279156 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="ceilometer-central-agent" Jan 27 18:30:56 crc kubenswrapper[4907]: E0127 18:30:56.279186 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af43216-6482-4024-a320-fa8855680d03" containerName="nova-api-api" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 
18:30:56.279194 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af43216-6482-4024-a320-fa8855680d03" containerName="nova-api-api" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279487 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="ceilometer-central-agent" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279510 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="sg-core" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279522 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9af43216-6482-4024-a320-fa8855680d03" containerName="nova-api-log" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279536 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="proxy-httpd" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279618 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" containerName="ceilometer-notification-agent" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279633 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="45bfe136-4245-4d16-9c68-2a21136b3b9a" containerName="nova-scheduler-scheduler" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.279650 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9af43216-6482-4024-a320-fa8855680d03" containerName="nova-api-api" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.282399 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.284177 4907 scope.go:117] "RemoveContainer" containerID="523048f659a18237b97cb416947fddec726a9e99732d2c72741880719562907f" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.289811 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.290040 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.301151 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.304034 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.306406 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.332768 4907 scope.go:117] "RemoveContainer" containerID="b0ecf89b35415280ed28a78077dbec5c9a90cfd8e1f7a3067489af30dd9433f7" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.344587 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.356972 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.361503 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.36148479 podStartE2EDuration="3.36148479s" podCreationTimestamp="2026-01-27 18:30:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:56.204368289 +0000 UTC m=+1511.333650901" 
watchObservedRunningTime="2026-01-27 18:30:56.36148479 +0000 UTC m=+1511.490767402" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.366076 4907 scope.go:117] "RemoveContainer" containerID="953bd6a21f22e80790f12697c3007910185a94c9be2431db4b70b13529bc24cb" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.379127 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.434081604 podStartE2EDuration="12.379103747s" podCreationTimestamp="2026-01-27 18:30:44 +0000 UTC" firstStartedPulling="2026-01-27 18:30:45.796918266 +0000 UTC m=+1500.926200878" lastFinishedPulling="2026-01-27 18:30:54.741940409 +0000 UTC m=+1509.871223021" observedRunningTime="2026-01-27 18:30:56.237349161 +0000 UTC m=+1511.366631793" watchObservedRunningTime="2026-01-27 18:30:56.379103747 +0000 UTC m=+1511.508386379" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.395469 4907 scope.go:117] "RemoveContainer" containerID="fa5fa97efb9685dd1c7db2a09e4acea16725221f474d904729f0bd8843fe23c1" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.410044 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.410080 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.410110 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3416074e-eff1-48c8-af01-f9dbc6c77a0e-logs\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.410137 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-config-data\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.410177 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zs8f\" (UniqueName: \"kubernetes.io/projected/3416074e-eff1-48c8-af01-f9dbc6c77a0e-kube-api-access-6zs8f\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.410225 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-scripts\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.410238 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvdqb\" (UniqueName: \"kubernetes.io/projected/5783d733-2312-484a-8fbe-7ea19d454c1a-kube-api-access-wvdqb\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.410257 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-run-httpd\") pod \"ceilometer-0\" (UID: 
\"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.410271 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-log-httpd\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.410305 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.410335 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-config-data\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.424844 4907 scope.go:117] "RemoveContainer" containerID="a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.464183 4907 scope.go:117] "RemoveContainer" containerID="a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646" Jan 27 18:30:56 crc kubenswrapper[4907]: E0127 18:30:56.465930 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646\": container with ID starting with a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646 not found: ID does not exist" 
containerID="a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.465969 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646"} err="failed to get container status \"a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646\": rpc error: code = NotFound desc = could not find container \"a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646\": container with ID starting with a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646 not found: ID does not exist" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.488916 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.507585 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.512249 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.512286 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.512316 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3416074e-eff1-48c8-af01-f9dbc6c77a0e-logs\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") 
" pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.512347 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-config-data\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.512383 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zs8f\" (UniqueName: \"kubernetes.io/projected/3416074e-eff1-48c8-af01-f9dbc6c77a0e-kube-api-access-6zs8f\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.512435 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-scripts\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.512452 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvdqb\" (UniqueName: \"kubernetes.io/projected/5783d733-2312-484a-8fbe-7ea19d454c1a-kube-api-access-wvdqb\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.512475 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-run-httpd\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.512491 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-log-httpd\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.512523 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.512566 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-config-data\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.513458 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3416074e-eff1-48c8-af01-f9dbc6c77a0e-logs\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.514085 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-run-httpd\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.514137 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-log-httpd\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.521866 4907 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-scheduler-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.523884 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.533010 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.533066 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.533222 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.534277 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-config-data\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.537875 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.543183 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc 
kubenswrapper[4907]: I0127 18:30:56.549540 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-scripts\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.550257 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvdqb\" (UniqueName: \"kubernetes.io/projected/5783d733-2312-484a-8fbe-7ea19d454c1a-kube-api-access-wvdqb\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.552053 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.555893 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-config-data\") pod \"ceilometer-0\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.558263 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zs8f\" (UniqueName: \"kubernetes.io/projected/3416074e-eff1-48c8-af01-f9dbc6c77a0e-kube-api-access-6zs8f\") pod \"nova-api-0\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.558765 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-sg-core-conf-yaml\") pod \"ceilometer-0\" 
(UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.614355 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.614530 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-config-data\") pod \"nova-scheduler-0\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.614591 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv2nz\" (UniqueName: \"kubernetes.io/projected/aa954535-3c88-43d5-ba61-2cc0c9c6690f-kube-api-access-fv2nz\") pod \"nova-scheduler-0\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.618092 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.635212 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.719334 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-config-data\") pod \"nova-scheduler-0\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.724324 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv2nz\" (UniqueName: \"kubernetes.io/projected/aa954535-3c88-43d5-ba61-2cc0c9c6690f-kube-api-access-fv2nz\") pod \"nova-scheduler-0\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.725018 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.738436 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-config-data\") pod \"nova-scheduler-0\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.738439 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.746572 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-fv2nz\" (UniqueName: \"kubernetes.io/projected/aa954535-3c88-43d5-ba61-2cc0c9c6690f-kube-api-access-fv2nz\") pod \"nova-scheduler-0\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") " pod="openstack/nova-scheduler-0" Jan 27 18:30:56 crc kubenswrapper[4907]: I0127 18:30:56.865202 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.061245 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.139483 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-scripts\") pod \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.139825 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-combined-ca-bundle\") pod \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.140024 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-config-data\") pod \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\" (UID: \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.140203 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgx2v\" (UniqueName: \"kubernetes.io/projected/8f9b4dfd-c141-4a97-9656-3f48e7a04309-kube-api-access-qgx2v\") pod \"8f9b4dfd-c141-4a97-9656-3f48e7a04309\" (UID: 
\"8f9b4dfd-c141-4a97-9656-3f48e7a04309\") " Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.144476 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nr6n7" event={"ID":"8f9b4dfd-c141-4a97-9656-3f48e7a04309","Type":"ContainerDied","Data":"53cd4deec1055166dfe266c84a916bd926b595e3dcc201c9ff865ffeed80b231"} Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.144514 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53cd4deec1055166dfe266c84a916bd926b595e3dcc201c9ff865ffeed80b231" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.144821 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nr6n7" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.148359 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f9b4dfd-c141-4a97-9656-3f48e7a04309-kube-api-access-qgx2v" (OuterVolumeSpecName: "kube-api-access-qgx2v") pod "8f9b4dfd-c141-4a97-9656-3f48e7a04309" (UID: "8f9b4dfd-c141-4a97-9656-3f48e7a04309"). InnerVolumeSpecName "kube-api-access-qgx2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.148409 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-scripts" (OuterVolumeSpecName: "scripts") pod "8f9b4dfd-c141-4a97-9656-3f48e7a04309" (UID: "8f9b4dfd-c141-4a97-9656-3f48e7a04309"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.158167 4907 generic.go:334] "Generic (PLEG): container finished" podID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerID="5e95cc0b0b4316adc78436ac6c627d99d2aaa26db075120089d62720b076de22" exitCode=0 Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.158255 4907 generic.go:334] "Generic (PLEG): container finished" podID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerID="046248ed8d9a9abf6adc66c7f8fc1ac0fb750db71be11fc6d07cf5ab366d9f13" exitCode=0 Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.158265 4907 generic.go:334] "Generic (PLEG): container finished" podID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerID="ea827ebb4a6c25ba9e21b4d37f520682eb5c11d4458964a25f261bd883feb99a" exitCode=0 Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.158543 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eec3a07c-7c6e-40ed-9a0a-f1952f923616","Type":"ContainerDied","Data":"5e95cc0b0b4316adc78436ac6c627d99d2aaa26db075120089d62720b076de22"} Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.158761 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eec3a07c-7c6e-40ed-9a0a-f1952f923616","Type":"ContainerDied","Data":"046248ed8d9a9abf6adc66c7f8fc1ac0fb750db71be11fc6d07cf5ab366d9f13"} Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.158843 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eec3a07c-7c6e-40ed-9a0a-f1952f923616","Type":"ContainerDied","Data":"ea827ebb4a6c25ba9e21b4d37f520682eb5c11d4458964a25f261bd883feb99a"} Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.183393 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f9b4dfd-c141-4a97-9656-3f48e7a04309" 
(UID: "8f9b4dfd-c141-4a97-9656-3f48e7a04309"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.192384 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-config-data" (OuterVolumeSpecName: "config-data") pod "8f9b4dfd-c141-4a97-9656-3f48e7a04309" (UID: "8f9b4dfd-c141-4a97-9656-3f48e7a04309"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.215313 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.245056 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgx2v\" (UniqueName: \"kubernetes.io/projected/8f9b4dfd-c141-4a97-9656-3f48e7a04309-kube-api-access-qgx2v\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.245089 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.245104 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.245117 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f9b4dfd-c141-4a97-9656-3f48e7a04309-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.382192 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.660090 4907 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.770221 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43c0bea1-2042-4d24-81b3-bc7c93696fcb" path="/var/lib/kubelet/pods/43c0bea1-2042-4d24-81b3-bc7c93696fcb/volumes" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.772004 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45bfe136-4245-4d16-9c68-2a21136b3b9a" path="/var/lib/kubelet/pods/45bfe136-4245-4d16-9c68-2a21136b3b9a/volumes" Jan 27 18:30:57 crc kubenswrapper[4907]: I0127 18:30:57.772766 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9af43216-6482-4024-a320-fa8855680d03" path="/var/lib/kubelet/pods/9af43216-6482-4024-a320-fa8855680d03/volumes" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.172941 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aa954535-3c88-43d5-ba61-2cc0c9c6690f","Type":"ContainerStarted","Data":"0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708"} Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.172993 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aa954535-3c88-43d5-ba61-2cc0c9c6690f","Type":"ContainerStarted","Data":"5489fcea5b6ae89e6b35c046631a54afc66829b16e074f7f6498d1c0a256c442"} Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.180367 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3416074e-eff1-48c8-af01-f9dbc6c77a0e","Type":"ContainerStarted","Data":"0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c"} Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.180418 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"3416074e-eff1-48c8-af01-f9dbc6c77a0e","Type":"ContainerStarted","Data":"e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df"} Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.180430 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3416074e-eff1-48c8-af01-f9dbc6c77a0e","Type":"ContainerStarted","Data":"7c25ec383f00142113ac9300e8dffba737e591fb0b61aea496b9da56aff1e861"} Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.184610 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 18:30:58 crc kubenswrapper[4907]: E0127 18:30:58.185250 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f9b4dfd-c141-4a97-9656-3f48e7a04309" containerName="nova-cell1-conductor-db-sync" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.185285 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f9b4dfd-c141-4a97-9656-3f48e7a04309" containerName="nova-cell1-conductor-db-sync" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.185613 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f9b4dfd-c141-4a97-9656-3f48e7a04309" containerName="nova-cell1-conductor-db-sync" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.186617 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5783d733-2312-484a-8fbe-7ea19d454c1a","Type":"ContainerStarted","Data":"eab5edc13640e8653e78a3f8680981b7da42ad91be33abedd1cee2900fc2aa2c"} Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.186738 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.188907 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.229181 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.249727 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.249704894 podStartE2EDuration="2.249704894s" podCreationTimestamp="2026-01-27 18:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:58.195722909 +0000 UTC m=+1513.325005521" watchObservedRunningTime="2026-01-27 18:30:58.249704894 +0000 UTC m=+1513.378987516" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.264865 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.264843762 podStartE2EDuration="2.264843762s" podCreationTimestamp="2026-01-27 18:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:58.242091269 +0000 UTC m=+1513.371373881" watchObservedRunningTime="2026-01-27 18:30:58.264843762 +0000 UTC m=+1513.394126374" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.303145 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbslb\" (UniqueName: \"kubernetes.io/projected/4d13569d-0cc7-4ce3-ae16-b72ef4ea170c-kube-api-access-wbslb\") pod \"nova-cell1-conductor-0\" (UID: \"4d13569d-0cc7-4ce3-ae16-b72ef4ea170c\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.303253 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d13569d-0cc7-4ce3-ae16-b72ef4ea170c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4d13569d-0cc7-4ce3-ae16-b72ef4ea170c\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.303304 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d13569d-0cc7-4ce3-ae16-b72ef4ea170c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4d13569d-0cc7-4ce3-ae16-b72ef4ea170c\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.405150 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d13569d-0cc7-4ce3-ae16-b72ef4ea170c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4d13569d-0cc7-4ce3-ae16-b72ef4ea170c\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.405251 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d13569d-0cc7-4ce3-ae16-b72ef4ea170c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4d13569d-0cc7-4ce3-ae16-b72ef4ea170c\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.405445 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbslb\" (UniqueName: \"kubernetes.io/projected/4d13569d-0cc7-4ce3-ae16-b72ef4ea170c-kube-api-access-wbslb\") pod \"nova-cell1-conductor-0\" (UID: \"4d13569d-0cc7-4ce3-ae16-b72ef4ea170c\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.411844 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/4d13569d-0cc7-4ce3-ae16-b72ef4ea170c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4d13569d-0cc7-4ce3-ae16-b72ef4ea170c\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.412010 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d13569d-0cc7-4ce3-ae16-b72ef4ea170c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4d13569d-0cc7-4ce3-ae16-b72ef4ea170c\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.424241 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbslb\" (UniqueName: \"kubernetes.io/projected/4d13569d-0cc7-4ce3-ae16-b72ef4ea170c-kube-api-access-wbslb\") pod \"nova-cell1-conductor-0\" (UID: \"4d13569d-0cc7-4ce3-ae16-b72ef4ea170c\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.511296 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.876917 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 18:30:58 crc kubenswrapper[4907]: I0127 18:30:58.878646 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 18:30:59 crc kubenswrapper[4907]: I0127 18:30:59.005473 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 18:30:59 crc kubenswrapper[4907]: I0127 18:30:59.200592 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5783d733-2312-484a-8fbe-7ea19d454c1a","Type":"ContainerStarted","Data":"782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f"} Jan 27 18:30:59 crc kubenswrapper[4907]: I0127 18:30:59.202665 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4d13569d-0cc7-4ce3-ae16-b72ef4ea170c","Type":"ContainerStarted","Data":"d2f11f1cbabd63614a4b596ec577ead974caed76d8f23b4d2217116ea11f12f0"} Jan 27 18:31:00 crc kubenswrapper[4907]: I0127 18:31:00.213417 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5783d733-2312-484a-8fbe-7ea19d454c1a","Type":"ContainerStarted","Data":"7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a"} Jan 27 18:31:00 crc kubenswrapper[4907]: I0127 18:31:00.213680 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5783d733-2312-484a-8fbe-7ea19d454c1a","Type":"ContainerStarted","Data":"bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2"} Jan 27 18:31:00 crc kubenswrapper[4907]: I0127 18:31:00.215259 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"4d13569d-0cc7-4ce3-ae16-b72ef4ea170c","Type":"ContainerStarted","Data":"052885f519be471da5b8312edf245577ef787fd8e03dd0d2cd3b61536d415648"} Jan 27 18:31:00 crc kubenswrapper[4907]: I0127 18:31:00.216433 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 27 18:31:00 crc kubenswrapper[4907]: I0127 18:31:00.233689 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.233672554 podStartE2EDuration="2.233672554s" podCreationTimestamp="2026-01-27 18:30:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:31:00.233011966 +0000 UTC m=+1515.362294578" watchObservedRunningTime="2026-01-27 18:31:00.233672554 +0000 UTC m=+1515.362955166" Jan 27 18:31:01 crc kubenswrapper[4907]: I0127 18:31:01.866163 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 18:31:02 crc kubenswrapper[4907]: I0127 18:31:02.251160 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5783d733-2312-484a-8fbe-7ea19d454c1a","Type":"ContainerStarted","Data":"171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7"} Jan 27 18:31:02 crc kubenswrapper[4907]: I0127 18:31:02.251219 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 18:31:02 crc kubenswrapper[4907]: I0127 18:31:02.274957 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.344565665 podStartE2EDuration="6.274933554s" podCreationTimestamp="2026-01-27 18:30:56 +0000 UTC" firstStartedPulling="2026-01-27 18:30:57.386743615 +0000 UTC m=+1512.516026227" lastFinishedPulling="2026-01-27 18:31:01.317111504 +0000 UTC m=+1516.446394116" observedRunningTime="2026-01-27 
18:31:02.270697175 +0000 UTC m=+1517.399979807" watchObservedRunningTime="2026-01-27 18:31:02.274933554 +0000 UTC m=+1517.404216166" Jan 27 18:31:03 crc kubenswrapper[4907]: I0127 18:31:03.876125 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 18:31:03 crc kubenswrapper[4907]: I0127 18:31:03.876754 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 18:31:04 crc kubenswrapper[4907]: I0127 18:31:04.890797 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.251:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 18:31:04 crc kubenswrapper[4907]: I0127 18:31:04.890841 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.251:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 18:31:06 crc kubenswrapper[4907]: I0127 18:31:06.636431 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 18:31:06 crc kubenswrapper[4907]: I0127 18:31:06.639241 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 18:31:06 crc kubenswrapper[4907]: I0127 18:31:06.866493 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 18:31:06 crc kubenswrapper[4907]: I0127 18:31:06.904366 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 18:31:07 crc kubenswrapper[4907]: I0127 18:31:07.359341 4907 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 18:31:07 crc kubenswrapper[4907]: I0127 18:31:07.717794 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.253:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 18:31:07 crc kubenswrapper[4907]: I0127 18:31:07.717867 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.253:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 18:31:08 crc kubenswrapper[4907]: I0127 18:31:08.547055 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 27 18:31:13 crc kubenswrapper[4907]: I0127 18:31:13.885023 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 18:31:13 crc kubenswrapper[4907]: E0127 18:31:13.887712 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4269e70c_a481_47cf_a9fe_7d9095cb4445.slice/crio-88ea244257e06e56a19501ea1e6124f02fd538a2f4b98bed18392bb73c6da0d7\": RecentStats: unable to find data in memory cache]" Jan 27 18:31:13 crc kubenswrapper[4907]: E0127 18:31:13.887765 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4269e70c_a481_47cf_a9fe_7d9095cb4445.slice/crio-88ea244257e06e56a19501ea1e6124f02fd538a2f4b98bed18392bb73c6da0d7\": RecentStats: unable to find data in memory cache]" Jan 27 18:31:13 crc 
kubenswrapper[4907]: E0127 18:31:13.888374 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4269e70c_a481_47cf_a9fe_7d9095cb4445.slice/crio-88ea244257e06e56a19501ea1e6124f02fd538a2f4b98bed18392bb73c6da0d7\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-conmon-ea827ebb4a6c25ba9e21b4d37f520682eb5c11d4458964a25f261bd883feb99a.scope\": RecentStats: unable to find data in memory cache]" Jan 27 18:31:13 crc kubenswrapper[4907]: E0127 18:31:13.889674 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4269e70c_a481_47cf_a9fe_7d9095cb4445.slice/crio-88ea244257e06e56a19501ea1e6124f02fd538a2f4b98bed18392bb73c6da0d7\": RecentStats: unable to find data in memory cache]" Jan 27 18:31:13 crc kubenswrapper[4907]: I0127 18:31:13.890109 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 18:31:13 crc kubenswrapper[4907]: E0127 18:31:13.892871 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4269e70c_a481_47cf_a9fe_7d9095cb4445.slice/crio-88ea244257e06e56a19501ea1e6124f02fd538a2f4b98bed18392bb73c6da0d7\": RecentStats: unable to find data in memory cache]" Jan 27 18:31:13 crc kubenswrapper[4907]: I0127 18:31:13.895226 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.402143 4907 generic.go:334] "Generic (PLEG): container finished" podID="d1efa2c3-6982-45b0-830c-043caf2979ba" 
containerID="88276e87d3b070cf8843fa34d81f32ef9093bc5ca757768f4520044bd9bd9abd" exitCode=137 Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.402429 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1efa2c3-6982-45b0-830c-043caf2979ba","Type":"ContainerDied","Data":"88276e87d3b070cf8843fa34d81f32ef9093bc5ca757768f4520044bd9bd9abd"} Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.403072 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1efa2c3-6982-45b0-830c-043caf2979ba","Type":"ContainerDied","Data":"006028793e44c1a97367e636d50f23e1272f87c44b3f8d8441b2e73720f6592a"} Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.403092 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="006028793e44c1a97367e636d50f23e1272f87c44b3f8d8441b2e73720f6592a" Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.408352 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.447007 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.512594 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-combined-ca-bundle\") pod \"d1efa2c3-6982-45b0-830c-043caf2979ba\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.512874 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-config-data\") pod \"d1efa2c3-6982-45b0-830c-043caf2979ba\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.512983 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdwr2\" (UniqueName: \"kubernetes.io/projected/d1efa2c3-6982-45b0-830c-043caf2979ba-kube-api-access-tdwr2\") pod \"d1efa2c3-6982-45b0-830c-043caf2979ba\" (UID: \"d1efa2c3-6982-45b0-830c-043caf2979ba\") " Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.525229 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1efa2c3-6982-45b0-830c-043caf2979ba-kube-api-access-tdwr2" (OuterVolumeSpecName: "kube-api-access-tdwr2") pod "d1efa2c3-6982-45b0-830c-043caf2979ba" (UID: "d1efa2c3-6982-45b0-830c-043caf2979ba"). InnerVolumeSpecName "kube-api-access-tdwr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.577144 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1efa2c3-6982-45b0-830c-043caf2979ba" (UID: "d1efa2c3-6982-45b0-830c-043caf2979ba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.577464 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-config-data" (OuterVolumeSpecName: "config-data") pod "d1efa2c3-6982-45b0-830c-043caf2979ba" (UID: "d1efa2c3-6982-45b0-830c-043caf2979ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.617377 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.617616 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1efa2c3-6982-45b0-830c-043caf2979ba-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:14 crc kubenswrapper[4907]: I0127 18:31:14.617712 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdwr2\" (UniqueName: \"kubernetes.io/projected/d1efa2c3-6982-45b0-830c-043caf2979ba-kube-api-access-tdwr2\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.415631 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.479974 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.501741 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.528923 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:31:15 crc kubenswrapper[4907]: E0127 18:31:15.529759 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1efa2c3-6982-45b0-830c-043caf2979ba" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.529783 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1efa2c3-6982-45b0-830c-043caf2979ba" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.530252 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1efa2c3-6982-45b0-830c-043caf2979ba" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.531811 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.538511 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.538833 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.539103 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.551658 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.638097 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hth69\" (UniqueName: \"kubernetes.io/projected/3257c75e-f45f-4166-b7ba-66c1990ac2dc-kube-api-access-hth69\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.638163 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.638260 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 
27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.638381 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.638401 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.740394 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.740571 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.740713 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: 
I0127 18:31:15.740762 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.740817 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hth69\" (UniqueName: \"kubernetes.io/projected/3257c75e-f45f-4166-b7ba-66c1990ac2dc-kube-api-access-hth69\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.745937 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.746080 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.746265 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.746786 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3257c75e-f45f-4166-b7ba-66c1990ac2dc-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.758077 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hth69\" (UniqueName: \"kubernetes.io/projected/3257c75e-f45f-4166-b7ba-66c1990ac2dc-kube-api-access-hth69\") pod \"nova-cell1-novncproxy-0\" (UID: \"3257c75e-f45f-4166-b7ba-66c1990ac2dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.782931 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1efa2c3-6982-45b0-830c-043caf2979ba" path="/var/lib/kubelet/pods/d1efa2c3-6982-45b0-830c-043caf2979ba/volumes" Jan 27 18:31:15 crc kubenswrapper[4907]: I0127 18:31:15.856135 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:16 crc kubenswrapper[4907]: I0127 18:31:16.313079 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:31:16 crc kubenswrapper[4907]: I0127 18:31:16.430927 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3257c75e-f45f-4166-b7ba-66c1990ac2dc","Type":"ContainerStarted","Data":"1f6e88115e10ab374ec003fe7379ef64dabf861060c19f5c5094b89c8f7e99d3"} Jan 27 18:31:16 crc kubenswrapper[4907]: I0127 18:31:16.640227 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 18:31:16 crc kubenswrapper[4907]: I0127 18:31:16.640758 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 18:31:16 crc kubenswrapper[4907]: I0127 18:31:16.643032 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 18:31:16 crc kubenswrapper[4907]: I0127 18:31:16.646393 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 18:31:16 crc kubenswrapper[4907]: E0127 18:31:16.757424 4907 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/59d80fe13631613822e66e4572059c7d9ff67833c45668bac89afec0e92d4169/diff" to get inode usage: stat /var/lib/containers/storage/overlay/59d80fe13631613822e66e4572059c7d9ff67833c45668bac89afec0e92d4169/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_dnsmasq-dns-7d978555f9-dwq2p_719784a4-cead-4054-ac6b-e7e45118be8c/dnsmasq-dns/0.log" to get inode usage: stat /var/log/pods/openstack_dnsmasq-dns-7d978555f9-dwq2p_719784a4-cead-4054-ac6b-e7e45118be8c/dnsmasq-dns/0.log: no such file or directory Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.442093 4907 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3257c75e-f45f-4166-b7ba-66c1990ac2dc","Type":"ContainerStarted","Data":"65ed21d8e8e267517f529987aee51b72132330ab418de56f787691031d60a43f"} Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.442628 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.454494 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.470996 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.470978829 podStartE2EDuration="2.470978829s" podCreationTimestamp="2026-01-27 18:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:31:17.461331447 +0000 UTC m=+1532.590614059" watchObservedRunningTime="2026-01-27 18:31:17.470978829 +0000 UTC m=+1532.600261441" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.650661 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-rqfpj"] Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.653064 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.685734 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-rqfpj"] Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.794344 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.794510 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-config\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.794654 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.794695 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj74m\" (UniqueName: \"kubernetes.io/projected/2a5f060b-75dd-4083-badf-a9d208f59b65-kube-api-access-wj74m\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.794740 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.794847 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.896637 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.896735 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.896867 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-config\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.896952 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.896986 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj74m\" (UniqueName: \"kubernetes.io/projected/2a5f060b-75dd-4083-badf-a9d208f59b65-kube-api-access-wj74m\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.897026 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.897678 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.897844 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.897857 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.898461 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-config\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.899016 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.916200 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj74m\" (UniqueName: \"kubernetes.io/projected/2a5f060b-75dd-4083-badf-a9d208f59b65-kube-api-access-wj74m\") pod \"dnsmasq-dns-6d99f6bc7f-rqfpj\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:17 crc kubenswrapper[4907]: I0127 18:31:17.983350 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:18 crc kubenswrapper[4907]: I0127 18:31:18.577826 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-rqfpj"] Jan 27 18:31:19 crc kubenswrapper[4907]: I0127 18:31:19.466203 4907 generic.go:334] "Generic (PLEG): container finished" podID="2a5f060b-75dd-4083-badf-a9d208f59b65" containerID="f064d23f4fac689b5a994e7c87e0b8620a4d34af790c875556eda8d9fe99678c" exitCode=0 Jan 27 18:31:19 crc kubenswrapper[4907]: I0127 18:31:19.466302 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" event={"ID":"2a5f060b-75dd-4083-badf-a9d208f59b65","Type":"ContainerDied","Data":"f064d23f4fac689b5a994e7c87e0b8620a4d34af790c875556eda8d9fe99678c"} Jan 27 18:31:19 crc kubenswrapper[4907]: I0127 18:31:19.466656 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" event={"ID":"2a5f060b-75dd-4083-badf-a9d208f59b65","Type":"ContainerStarted","Data":"4bd8eb5f48ea3f38d33f0dd542b84168a28c90547f3c08b18a3dbbf20455e507"} Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.198394 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.199256 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="ceilometer-central-agent" containerID="cri-o://782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f" gracePeriod=30 Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.199329 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="sg-core" containerID="cri-o://7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a" gracePeriod=30 Jan 27 18:31:20 crc 
kubenswrapper[4907]: I0127 18:31:20.199371 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="ceilometer-notification-agent" containerID="cri-o://bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2" gracePeriod=30 Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.199428 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="proxy-httpd" containerID="cri-o://171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7" gracePeriod=30 Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.209017 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.252:3000/\": EOF" Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.383449 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.482854 4907 generic.go:334] "Generic (PLEG): container finished" podID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerID="171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7" exitCode=0 Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.482903 4907 generic.go:334] "Generic (PLEG): container finished" podID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerID="7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a" exitCode=2 Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.483005 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5783d733-2312-484a-8fbe-7ea19d454c1a","Type":"ContainerDied","Data":"171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7"} Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.483061 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5783d733-2312-484a-8fbe-7ea19d454c1a","Type":"ContainerDied","Data":"7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a"} Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.487337 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" containerName="nova-api-log" containerID="cri-o://e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df" gracePeriod=30 Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.487498 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" event={"ID":"2a5f060b-75dd-4083-badf-a9d208f59b65","Type":"ContainerStarted","Data":"5da906835235118c6e1fd88133f8ad0821d70a4ef6cd33bf22120c41b608b900"} Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.488097 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" containerName="nova-api-api" containerID="cri-o://0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c" gracePeriod=30 Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.488471 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:20 crc kubenswrapper[4907]: I0127 18:31:20.510705 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" podStartSLOduration=3.510687437 podStartE2EDuration="3.510687437s" podCreationTimestamp="2026-01-27 18:31:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:31:20.510426919 +0000 UTC m=+1535.639709551" watchObservedRunningTime="2026-01-27 18:31:20.510687437 +0000 UTC m=+1535.639970049" Jan 27 18:31:20 crc 
kubenswrapper[4907]: I0127 18:31:20.856634 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.391621 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.493207 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-run-httpd\") pod \"5783d733-2312-484a-8fbe-7ea19d454c1a\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.493257 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-log-httpd\") pod \"5783d733-2312-484a-8fbe-7ea19d454c1a\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.493306 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-config-data\") pod \"5783d733-2312-484a-8fbe-7ea19d454c1a\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.493408 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-sg-core-conf-yaml\") pod \"5783d733-2312-484a-8fbe-7ea19d454c1a\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.493461 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-combined-ca-bundle\") pod 
\"5783d733-2312-484a-8fbe-7ea19d454c1a\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.493619 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5783d733-2312-484a-8fbe-7ea19d454c1a" (UID: "5783d733-2312-484a-8fbe-7ea19d454c1a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.493668 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvdqb\" (UniqueName: \"kubernetes.io/projected/5783d733-2312-484a-8fbe-7ea19d454c1a-kube-api-access-wvdqb\") pod \"5783d733-2312-484a-8fbe-7ea19d454c1a\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.493739 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-scripts\") pod \"5783d733-2312-484a-8fbe-7ea19d454c1a\" (UID: \"5783d733-2312-484a-8fbe-7ea19d454c1a\") " Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.493793 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5783d733-2312-484a-8fbe-7ea19d454c1a" (UID: "5783d733-2312-484a-8fbe-7ea19d454c1a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.494542 4907 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.494626 4907 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5783d733-2312-484a-8fbe-7ea19d454c1a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.499340 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-scripts" (OuterVolumeSpecName: "scripts") pod "5783d733-2312-484a-8fbe-7ea19d454c1a" (UID: "5783d733-2312-484a-8fbe-7ea19d454c1a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.503791 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5783d733-2312-484a-8fbe-7ea19d454c1a-kube-api-access-wvdqb" (OuterVolumeSpecName: "kube-api-access-wvdqb") pod "5783d733-2312-484a-8fbe-7ea19d454c1a" (UID: "5783d733-2312-484a-8fbe-7ea19d454c1a"). InnerVolumeSpecName "kube-api-access-wvdqb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.510515 4907 generic.go:334] "Generic (PLEG): container finished" podID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerID="bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2" exitCode=0 Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.510590 4907 generic.go:334] "Generic (PLEG): container finished" podID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerID="782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f" exitCode=0 Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.510585 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5783d733-2312-484a-8fbe-7ea19d454c1a","Type":"ContainerDied","Data":"bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2"} Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.510621 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.510645 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5783d733-2312-484a-8fbe-7ea19d454c1a","Type":"ContainerDied","Data":"782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f"} Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.510664 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5783d733-2312-484a-8fbe-7ea19d454c1a","Type":"ContainerDied","Data":"eab5edc13640e8653e78a3f8680981b7da42ad91be33abedd1cee2900fc2aa2c"} Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.510688 4907 scope.go:117] "RemoveContainer" containerID="171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.514006 4907 generic.go:334] "Generic (PLEG): container finished" podID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" 
containerID="e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df" exitCode=143 Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.514711 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3416074e-eff1-48c8-af01-f9dbc6c77a0e","Type":"ContainerDied","Data":"e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df"} Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.551497 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5783d733-2312-484a-8fbe-7ea19d454c1a" (UID: "5783d733-2312-484a-8fbe-7ea19d454c1a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.597273 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvdqb\" (UniqueName: \"kubernetes.io/projected/5783d733-2312-484a-8fbe-7ea19d454c1a-kube-api-access-wvdqb\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.597311 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.597324 4907 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.640739 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5783d733-2312-484a-8fbe-7ea19d454c1a" (UID: "5783d733-2312-484a-8fbe-7ea19d454c1a"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.678973 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-config-data" (OuterVolumeSpecName: "config-data") pod "5783d733-2312-484a-8fbe-7ea19d454c1a" (UID: "5783d733-2312-484a-8fbe-7ea19d454c1a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.699892 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.699972 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783d733-2312-484a-8fbe-7ea19d454c1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.798217 4907 scope.go:117] "RemoveContainer" containerID="7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.835333 4907 scope.go:117] "RemoveContainer" containerID="bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.855341 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.867160 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.868373 4907 scope.go:117] "RemoveContainer" containerID="782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.893781 4907 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Jan 27 18:31:21 crc kubenswrapper[4907]: E0127 18:31:21.894406 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="ceilometer-central-agent" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.894478 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="ceilometer-central-agent" Jan 27 18:31:21 crc kubenswrapper[4907]: E0127 18:31:21.894535 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="proxy-httpd" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.894602 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="proxy-httpd" Jan 27 18:31:21 crc kubenswrapper[4907]: E0127 18:31:21.894667 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="ceilometer-notification-agent" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.894740 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="ceilometer-notification-agent" Jan 27 18:31:21 crc kubenswrapper[4907]: E0127 18:31:21.894835 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="sg-core" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.894888 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="sg-core" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.895205 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="proxy-httpd" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.895291 4907 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="ceilometer-central-agent" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.895353 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="ceilometer-notification-agent" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.895406 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" containerName="sg-core" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.897516 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.900897 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.901175 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.919314 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.919860 4907 scope.go:117] "RemoveContainer" containerID="171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7" Jan 27 18:31:21 crc kubenswrapper[4907]: E0127 18:31:21.920816 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7\": container with ID starting with 171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7 not found: ID does not exist" containerID="171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.920856 4907 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7"} err="failed to get container status \"171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7\": rpc error: code = NotFound desc = could not find container \"171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7\": container with ID starting with 171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7 not found: ID does not exist" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.920882 4907 scope.go:117] "RemoveContainer" containerID="7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a" Jan 27 18:31:21 crc kubenswrapper[4907]: E0127 18:31:21.921368 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a\": container with ID starting with 7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a not found: ID does not exist" containerID="7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.921396 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a"} err="failed to get container status \"7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a\": rpc error: code = NotFound desc = could not find container \"7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a\": container with ID starting with 7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a not found: ID does not exist" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.921412 4907 scope.go:117] "RemoveContainer" containerID="bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2" Jan 27 18:31:21 crc kubenswrapper[4907]: E0127 18:31:21.925675 4907 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2\": container with ID starting with bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2 not found: ID does not exist" containerID="bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.925714 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2"} err="failed to get container status \"bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2\": rpc error: code = NotFound desc = could not find container \"bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2\": container with ID starting with bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2 not found: ID does not exist" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.925739 4907 scope.go:117] "RemoveContainer" containerID="782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f" Jan 27 18:31:21 crc kubenswrapper[4907]: E0127 18:31:21.926293 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f\": container with ID starting with 782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f not found: ID does not exist" containerID="782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.926419 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f"} err="failed to get container status \"782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f\": rpc error: code = NotFound desc = could not find container 
\"782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f\": container with ID starting with 782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f not found: ID does not exist" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.926462 4907 scope.go:117] "RemoveContainer" containerID="171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.926795 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7"} err="failed to get container status \"171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7\": rpc error: code = NotFound desc = could not find container \"171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7\": container with ID starting with 171eb6dc0b7c695a3b9a85183c8eda34c2d1ba6fd8ee9e3a01e11daa8d7311e7 not found: ID does not exist" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.926822 4907 scope.go:117] "RemoveContainer" containerID="7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.927800 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a"} err="failed to get container status \"7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a\": rpc error: code = NotFound desc = could not find container \"7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a\": container with ID starting with 7ba0270b01d15e95a2eda5881880ba733d2c075831c48b90cae556168d843d5a not found: ID does not exist" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.927826 4907 scope.go:117] "RemoveContainer" containerID="bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.928785 4907 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2"} err="failed to get container status \"bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2\": rpc error: code = NotFound desc = could not find container \"bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2\": container with ID starting with bd346ff7a6348ef3acb15c5336355156bad762debe599f131bb43082ff1565e2 not found: ID does not exist" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.928809 4907 scope.go:117] "RemoveContainer" containerID="782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f" Jan 27 18:31:21 crc kubenswrapper[4907]: I0127 18:31:21.929971 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f"} err="failed to get container status \"782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f\": rpc error: code = NotFound desc = could not find container \"782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f\": container with ID starting with 782a245869bbabd14a0b70d5b1da48900694e6ac63e7382139de585a5565692f not found: ID does not exist" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.007042 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-config-data\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.007105 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-log-httpd\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") 
" pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.007141 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h8hq\" (UniqueName: \"kubernetes.io/projected/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-kube-api-access-6h8hq\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.007221 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-run-httpd\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.007288 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.007412 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-scripts\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.007466 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.109296 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-scripts\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.109386 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.109450 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-config-data\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.109481 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-log-httpd\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.109507 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h8hq\" (UniqueName: \"kubernetes.io/projected/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-kube-api-access-6h8hq\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.109593 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-run-httpd\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " 
pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.109643 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.110135 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-log-httpd\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.110302 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-run-httpd\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.113229 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.113461 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-scripts\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.113949 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-config-data\") pod 
\"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.114646 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.127509 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h8hq\" (UniqueName: \"kubernetes.io/projected/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-kube-api-access-6h8hq\") pod \"ceilometer-0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.230218 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.421218 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:31:22 crc kubenswrapper[4907]: I0127 18:31:22.724722 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:31:23 crc kubenswrapper[4907]: I0127 18:31:23.545941 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0","Type":"ContainerStarted","Data":"8f4db74e631f07d1a8028e659fbb70c4e579b183c47622db7e00b59be6c8e0e6"} Jan 27 18:31:23 crc kubenswrapper[4907]: I0127 18:31:23.546227 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0","Type":"ContainerStarted","Data":"920949198b04d9f03e2ee16b588c61df0a8878a810066103d538193d6f3eeab6"} Jan 27 18:31:23 crc kubenswrapper[4907]: I0127 18:31:23.780920 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="5783d733-2312-484a-8fbe-7ea19d454c1a" path="/var/lib/kubelet/pods/5783d733-2312-484a-8fbe-7ea19d454c1a/volumes" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.193194 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.272628 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3416074e-eff1-48c8-af01-f9dbc6c77a0e-logs\") pod \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.273021 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-combined-ca-bundle\") pod \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.273079 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zs8f\" (UniqueName: \"kubernetes.io/projected/3416074e-eff1-48c8-af01-f9dbc6c77a0e-kube-api-access-6zs8f\") pod \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.273105 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3416074e-eff1-48c8-af01-f9dbc6c77a0e-logs" (OuterVolumeSpecName: "logs") pod "3416074e-eff1-48c8-af01-f9dbc6c77a0e" (UID: "3416074e-eff1-48c8-af01-f9dbc6c77a0e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.273174 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-config-data\") pod \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\" (UID: \"3416074e-eff1-48c8-af01-f9dbc6c77a0e\") " Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.274115 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3416074e-eff1-48c8-af01-f9dbc6c77a0e-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.279936 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3416074e-eff1-48c8-af01-f9dbc6c77a0e-kube-api-access-6zs8f" (OuterVolumeSpecName: "kube-api-access-6zs8f") pod "3416074e-eff1-48c8-af01-f9dbc6c77a0e" (UID: "3416074e-eff1-48c8-af01-f9dbc6c77a0e"). InnerVolumeSpecName "kube-api-access-6zs8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.305977 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3416074e-eff1-48c8-af01-f9dbc6c77a0e" (UID: "3416074e-eff1-48c8-af01-f9dbc6c77a0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.322726 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-config-data" (OuterVolumeSpecName: "config-data") pod "3416074e-eff1-48c8-af01-f9dbc6c77a0e" (UID: "3416074e-eff1-48c8-af01-f9dbc6c77a0e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.376592 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.376626 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zs8f\" (UniqueName: \"kubernetes.io/projected/3416074e-eff1-48c8-af01-f9dbc6c77a0e-kube-api-access-6zs8f\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.376637 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3416074e-eff1-48c8-af01-f9dbc6c77a0e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.558442 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0","Type":"ContainerStarted","Data":"d8442c4901393f0e560356a3ebfb671914b2a645cd92b4cb32371b0118504dca"} Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.560097 4907 generic.go:334] "Generic (PLEG): container finished" podID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" containerID="0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c" exitCode=0 Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.560147 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3416074e-eff1-48c8-af01-f9dbc6c77a0e","Type":"ContainerDied","Data":"0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c"} Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.560167 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.560192 4907 scope.go:117] "RemoveContainer" containerID="0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.560179 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3416074e-eff1-48c8-af01-f9dbc6c77a0e","Type":"ContainerDied","Data":"7c25ec383f00142113ac9300e8dffba737e591fb0b61aea496b9da56aff1e861"} Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.585943 4907 scope.go:117] "RemoveContainer" containerID="e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.605700 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.612100 4907 scope.go:117] "RemoveContainer" containerID="0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c" Jan 27 18:31:24 crc kubenswrapper[4907]: E0127 18:31:24.612659 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c\": container with ID starting with 0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c not found: ID does not exist" containerID="0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.612691 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c"} err="failed to get container status \"0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c\": rpc error: code = NotFound desc = could not find container \"0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c\": container with ID starting with 
0c07d2706b97b370545399da18d0051a3746c51487a3087030f6f2effccbf43c not found: ID does not exist" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.612716 4907 scope.go:117] "RemoveContainer" containerID="e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df" Jan 27 18:31:24 crc kubenswrapper[4907]: E0127 18:31:24.616049 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df\": container with ID starting with e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df not found: ID does not exist" containerID="e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.616099 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df"} err="failed to get container status \"e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df\": rpc error: code = NotFound desc = could not find container \"e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df\": container with ID starting with e54c16192d7043b89c6ac6c067243f8c47e6cc36e83dc9bd9d1bef0c687731df not found: ID does not exist" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.623281 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.637779 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:24 crc kubenswrapper[4907]: E0127 18:31:24.638604 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" containerName="nova-api-api" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.638628 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" 
containerName="nova-api-api" Jan 27 18:31:24 crc kubenswrapper[4907]: E0127 18:31:24.638681 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" containerName="nova-api-log" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.638688 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" containerName="nova-api-log" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.639048 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" containerName="nova-api-api" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.639075 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" containerName="nova-api-log" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.641018 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.648682 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.648963 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.649178 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.653610 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.786217 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c85ba4-73c9-4126-8e07-9795c0cac323-logs\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 
crc kubenswrapper[4907]: I0127 18:31:24.786614 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-public-tls-certs\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.786772 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-internal-tls-certs\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.786849 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqn6g\" (UniqueName: \"kubernetes.io/projected/55c85ba4-73c9-4126-8e07-9795c0cac323-kube-api-access-jqn6g\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.787086 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.787578 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-config-data\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.890660 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-config-data\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.891121 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c85ba4-73c9-4126-8e07-9795c0cac323-logs\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.891193 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-public-tls-certs\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.891232 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-internal-tls-certs\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.891263 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqn6g\" (UniqueName: \"kubernetes.io/projected/55c85ba4-73c9-4126-8e07-9795c0cac323-kube-api-access-jqn6g\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.891325 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc 
kubenswrapper[4907]: I0127 18:31:24.895105 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c85ba4-73c9-4126-8e07-9795c0cac323-logs\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.896646 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.897039 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-config-data\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.897093 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-public-tls-certs\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.900714 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-internal-tls-certs\") pod \"nova-api-0\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.918147 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqn6g\" (UniqueName: \"kubernetes.io/projected/55c85ba4-73c9-4126-8e07-9795c0cac323-kube-api-access-jqn6g\") pod \"nova-api-0\" (UID: 
\"55c85ba4-73c9-4126-8e07-9795c0cac323\") " pod="openstack/nova-api-0" Jan 27 18:31:24 crc kubenswrapper[4907]: I0127 18:31:24.962373 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:31:25 crc kubenswrapper[4907]: I0127 18:31:25.574620 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0","Type":"ContainerStarted","Data":"cc48970436a23d36107729df6a28ff2694333a5023e59e9cfbb16739393b1415"} Jan 27 18:31:25 crc kubenswrapper[4907]: I0127 18:31:25.589169 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:25 crc kubenswrapper[4907]: I0127 18:31:25.779507 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3416074e-eff1-48c8-af01-f9dbc6c77a0e" path="/var/lib/kubelet/pods/3416074e-eff1-48c8-af01-f9dbc6c77a0e/volumes" Jan 27 18:31:25 crc kubenswrapper[4907]: I0127 18:31:25.857831 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:25 crc kubenswrapper[4907]: I0127 18:31:25.884056 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:26 crc kubenswrapper[4907]: W0127 18:31:26.204386 4907 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-conmon-046248ed8d9a9abf6adc66c7f8fc1ac0fb750db71be11fc6d07cf5ab366d9f13.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-conmon-046248ed8d9a9abf6adc66c7f8fc1ac0fb750db71be11fc6d07cf5ab366d9f13.scope: no such file or directory Jan 27 18:31:26 crc kubenswrapper[4907]: W0127 18:31:26.206274 4907 watcher.go:93] Error while processing 
event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-046248ed8d9a9abf6adc66c7f8fc1ac0fb750db71be11fc6d07cf5ab366d9f13.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-046248ed8d9a9abf6adc66c7f8fc1ac0fb750db71be11fc6d07cf5ab366d9f13.scope: no such file or directory Jan 27 18:31:26 crc kubenswrapper[4907]: W0127 18:31:26.208097 4907 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-conmon-5e95cc0b0b4316adc78436ac6c627d99d2aaa26db075120089d62720b076de22.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-conmon-5e95cc0b0b4316adc78436ac6c627d99d2aaa26db075120089d62720b076de22.scope: no such file or directory Jan 27 18:31:26 crc kubenswrapper[4907]: W0127 18:31:26.208158 4907 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-5e95cc0b0b4316adc78436ac6c627d99d2aaa26db075120089d62720b076de22.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-5e95cc0b0b4316adc78436ac6c627d99d2aaa26db075120089d62720b076de22.scope: no such file or directory Jan 27 18:31:26 crc kubenswrapper[4907]: W0127 18:31:26.213905 4907 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-conmon-d6dbd00ea15b939bc9e1a2c74e901a7874a34a11481d5248ee8b219bc89b8ee5.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-conmon-d6dbd00ea15b939bc9e1a2c74e901a7874a34a11481d5248ee8b219bc89b8ee5.scope: no such file or directory Jan 27 18:31:26 crc kubenswrapper[4907]: W0127 18:31:26.213960 4907 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-d6dbd00ea15b939bc9e1a2c74e901a7874a34a11481d5248ee8b219bc89b8ee5.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-d6dbd00ea15b939bc9e1a2c74e901a7874a34a11481d5248ee8b219bc89b8ee5.scope: no such file or directory Jan 27 18:31:26 crc kubenswrapper[4907]: W0127 18:31:26.214034 4907 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5783d733_2312_484a_8fbe_7ea19d454c1a.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5783d733_2312_484a_8fbe_7ea19d454c1a.slice: no such file or directory Jan 27 18:31:26 crc kubenswrapper[4907]: W0127 18:31:26.218793 4907 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3416074e_eff1_48c8_af01_f9dbc6c77a0e.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3416074e_eff1_48c8_af01_f9dbc6c77a0e.slice: no such file or directory Jan 27 18:31:26 crc 
kubenswrapper[4907]: E0127 18:31:26.485672 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f9b4dfd_c141_4a97_9656_3f48e7a04309.slice/crio-conmon-1f6118408a31d5a5e77efd770c6620f1c689b1f1d408e1f2ae98b9f2c6e384d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9af43216_6482_4024_a320_fa8855680d03.slice/crio-fa5fa97efb9685dd1c7db2a09e4acea16725221f474d904729f0bd8843fe23c1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45bfe136_4245_4d16_9c68_2a21136b3b9a.slice/crio-conmon-a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c0bea1_2042_4d24_81b3_bc7c93696fcb.slice/crio-conmon-160bfbd8ff3d696b07a91546ee269e538de92123a1f5d507d072d96996a51021.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45bfe136_4245_4d16_9c68_2a21136b3b9a.slice/crio-a1b7265fd48de0a70d1d569dc79bdd2415376f2e7e578fb0ad1ddda5cbf78646.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9af43216_6482_4024_a320_fa8855680d03.slice/crio-953bd6a21f22e80790f12697c3007910185a94c9be2431db4b70b13529bc24cb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c0bea1_2042_4d24_81b3_bc7c93696fcb.slice/crio-15de9a0dedeadeed2c3936432cdefafa1e6e44b42875901f6ff2f3beb62f8528.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-conmon-ea827ebb4a6c25ba9e21b4d37f520682eb5c11d4458964a25f261bd883feb99a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9af43216_6482_4024_a320_fa8855680d03.slice/crio-conmon-fa5fa97efb9685dd1c7db2a09e4acea16725221f474d904729f0bd8843fe23c1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73c0d1c7_cc84_4792_be06_ce4535d854f1.slice/crio-339f6defd8a590eb51556ae52114513b7964a9fb560b2dac1ea9ffc91a505f9c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f9b4dfd_c141_4a97_9656_3f48e7a04309.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45bfe136_4245_4d16_9c68_2a21136b3b9a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73c0d1c7_cc84_4792_be06_ce4535d854f1.slice/crio-1592ddc9ada7089f0a97767680308696c33b12e7b32b0616f4ee01e0285b7838.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73c0d1c7_cc84_4792_be06_ce4535d854f1.slice/crio-conmon-1592ddc9ada7089f0a97767680308696c33b12e7b32b0616f4ee01e0285b7838.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3a07c_7c6e_40ed_9a0a_f1952f923616.slice/crio-ea827ebb4a6c25ba9e21b4d37f520682eb5c11d4458964a25f261bd883feb99a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c0bea1_2042_4d24_81b3_bc7c93696fcb.slice/crio-conmon-523048f659a18237b97cb416947fddec726a9e99732d2c72741880719562907f.scope\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod719784a4_cead_4054_ac6b_e7e45118be8c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73c0d1c7_cc84_4792_be06_ce4535d854f1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f9b4dfd_c141_4a97_9656_3f48e7a04309.slice/crio-53cd4deec1055166dfe266c84a916bd926b595e3dcc201c9ff865ffeed80b231\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9af43216_6482_4024_a320_fa8855680d03.slice/crio-e04f632c782ad650c5f21f4659e785c7460b983a1b0277db3ff9956d9ab7061b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c0bea1_2042_4d24_81b3_bc7c93696fcb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c0bea1_2042_4d24_81b3_bc7c93696fcb.slice/crio-160bfbd8ff3d696b07a91546ee269e538de92123a1f5d507d072d96996a51021.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod719784a4_cead_4054_ac6b_e7e45118be8c.slice/crio-conmon-f62185a38c302c1e2c4f55c6ff8d8375c06e90c4a030436e37794aaca439103b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1efa2c3_6982_45b0_830c_043caf2979ba.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c0bea1_2042_4d24_81b3_bc7c93696fcb.slice/crio-conmon-15de9a0dedeadeed2c3936432cdefafa1e6e44b42875901f6ff2f3beb62f8528.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c0bea1_2042_4d24_81b3_bc7c93696fcb.slice/crio-b0ecf89b35415280ed28a78077dbec5c9a90cfd8e1f7a3067489af30dd9433f7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45bfe136_4245_4d16_9c68_2a21136b3b9a.slice/crio-92c93f931d8f51099774c248291d876431d14b5c43aa83d56753a0ac2d31f02a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c0bea1_2042_4d24_81b3_bc7c93696fcb.slice/crio-conmon-b0ecf89b35415280ed28a78077dbec5c9a90cfd8e1f7a3067489af30dd9433f7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f9b4dfd_c141_4a97_9656_3f48e7a04309.slice/crio-1f6118408a31d5a5e77efd770c6620f1c689b1f1d408e1f2ae98b9f2c6e384d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1efa2c3_6982_45b0_830c_043caf2979ba.slice/crio-006028793e44c1a97367e636d50f23e1272f87c44b3f8d8441b2e73720f6592a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1efa2c3_6982_45b0_830c_043caf2979ba.slice/crio-88276e87d3b070cf8843fa34d81f32ef9093bc5ca757768f4520044bd9bd9abd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod719784a4_cead_4054_ac6b_e7e45118be8c.slice/crio-f62185a38c302c1e2c4f55c6ff8d8375c06e90c4a030436e37794aaca439103b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9af43216_6482_4024_a320_fa8855680d03.slice/crio-conmon-953bd6a21f22e80790f12697c3007910185a94c9be2431db4b70b13529bc24cb.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1efa2c3_6982_45b0_830c_043caf2979ba.slice/crio-conmon-88276e87d3b070cf8843fa34d81f32ef9093bc5ca757768f4520044bd9bd9abd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod719784a4_cead_4054_ac6b_e7e45118be8c.slice/crio-fb2fc41aa6c79868126426826ea77ab0aae08150f293d25b0312a5646e2300eb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9af43216_6482_4024_a320_fa8855680d03.slice\": RecentStats: unable to find data in memory cache]" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.521192 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.521579 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.590785 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55c85ba4-73c9-4126-8e07-9795c0cac323","Type":"ContainerStarted","Data":"85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1"} Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.590825 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55c85ba4-73c9-4126-8e07-9795c0cac323","Type":"ContainerStarted","Data":"2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee"} Jan 27 
18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.590835 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55c85ba4-73c9-4126-8e07-9795c0cac323","Type":"ContainerStarted","Data":"e4d6a13adca6c0e1bb31637f5bdeeb280fdc3bcb042efa87415ac3e6115add8f"} Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.598883 4907 generic.go:334] "Generic (PLEG): container finished" podID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerID="d6dbd00ea15b939bc9e1a2c74e901a7874a34a11481d5248ee8b219bc89b8ee5" exitCode=137 Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.599003 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eec3a07c-7c6e-40ed-9a0a-f1952f923616","Type":"ContainerDied","Data":"d6dbd00ea15b939bc9e1a2c74e901a7874a34a11481d5248ee8b219bc89b8ee5"} Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.620755 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.624964 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.624948077 podStartE2EDuration="2.624948077s" podCreationTimestamp="2026-01-27 18:31:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:31:26.609802849 +0000 UTC m=+1541.739085481" watchObservedRunningTime="2026-01-27 18:31:26.624948077 +0000 UTC m=+1541.754230689" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.627178 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.772676 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whppk\" (UniqueName: \"kubernetes.io/projected/eec3a07c-7c6e-40ed-9a0a-f1952f923616-kube-api-access-whppk\") pod \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.772724 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-scripts\") pod \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.772824 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-combined-ca-bundle\") pod \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.772841 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-config-data\") pod \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\" (UID: \"eec3a07c-7c6e-40ed-9a0a-f1952f923616\") " Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.787832 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-scripts" (OuterVolumeSpecName: "scripts") pod "eec3a07c-7c6e-40ed-9a0a-f1952f923616" (UID: "eec3a07c-7c6e-40ed-9a0a-f1952f923616"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.800077 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eec3a07c-7c6e-40ed-9a0a-f1952f923616-kube-api-access-whppk" (OuterVolumeSpecName: "kube-api-access-whppk") pod "eec3a07c-7c6e-40ed-9a0a-f1952f923616" (UID: "eec3a07c-7c6e-40ed-9a0a-f1952f923616"). InnerVolumeSpecName "kube-api-access-whppk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.875599 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whppk\" (UniqueName: \"kubernetes.io/projected/eec3a07c-7c6e-40ed-9a0a-f1952f923616-kube-api-access-whppk\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.876000 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.877535 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-q8zd6"] Jan 27 18:31:26 crc kubenswrapper[4907]: E0127 18:31:26.878111 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-listener" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.878133 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-listener" Jan 27 18:31:26 crc kubenswrapper[4907]: E0127 18:31:26.878151 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-api" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.878159 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-api" Jan 27 
18:31:26 crc kubenswrapper[4907]: E0127 18:31:26.878180 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-evaluator" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.878188 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-evaluator" Jan 27 18:31:26 crc kubenswrapper[4907]: E0127 18:31:26.878249 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-notifier" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.878257 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-notifier" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.880982 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-listener" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.881035 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-api" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.881052 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-evaluator" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.881089 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" containerName="aodh-notifier" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.884321 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.900903 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.901084 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.901572 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-q8zd6"] Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.973607 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eec3a07c-7c6e-40ed-9a0a-f1952f923616" (UID: "eec3a07c-7c6e-40ed-9a0a-f1952f923616"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.980965 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-config-data\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.981111 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-scripts\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.981385 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.981916 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2fzq\" (UniqueName: \"kubernetes.io/projected/52256d78-f327-4af2-9452-0483ad62dea0-kube-api-access-l2fzq\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:26 crc kubenswrapper[4907]: I0127 18:31:26.982114 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.003331 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-config-data" (OuterVolumeSpecName: "config-data") pod "eec3a07c-7c6e-40ed-9a0a-f1952f923616" (UID: "eec3a07c-7c6e-40ed-9a0a-f1952f923616"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.084300 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-config-data\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.084381 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-scripts\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.084450 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.084611 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2fzq\" (UniqueName: \"kubernetes.io/projected/52256d78-f327-4af2-9452-0483ad62dea0-kube-api-access-l2fzq\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.084691 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eec3a07c-7c6e-40ed-9a0a-f1952f923616-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.088522 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-config-data\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.089165 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.089835 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-scripts\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.101987 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2fzq\" (UniqueName: \"kubernetes.io/projected/52256d78-f327-4af2-9452-0483ad62dea0-kube-api-access-l2fzq\") pod \"nova-cell1-cell-mapping-q8zd6\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.120095 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.614889 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eec3a07c-7c6e-40ed-9a0a-f1952f923616","Type":"ContainerDied","Data":"2d41b6a1c04472fbb2c5f35afd45971f94d6d5157fb4cddf70ad7a0f3291f763"} Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.615263 4907 scope.go:117] "RemoveContainer" containerID="d6dbd00ea15b939bc9e1a2c74e901a7874a34a11481d5248ee8b219bc89b8ee5" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.614939 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.619319 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0","Type":"ContainerStarted","Data":"23102149ebcf564713aeb5d821044122ef3dd6c243f7d92d44d100330d367516"} Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.619543 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="ceilometer-central-agent" containerID="cri-o://8f4db74e631f07d1a8028e659fbb70c4e579b183c47622db7e00b59be6c8e0e6" gracePeriod=30 Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.619691 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="proxy-httpd" containerID="cri-o://23102149ebcf564713aeb5d821044122ef3dd6c243f7d92d44d100330d367516" gracePeriod=30 Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.619761 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="sg-core" 
containerID="cri-o://cc48970436a23d36107729df6a28ff2694333a5023e59e9cfbb16739393b1415" gracePeriod=30 Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.619808 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="ceilometer-notification-agent" containerID="cri-o://d8442c4901393f0e560356a3ebfb671914b2a645cd92b4cb32371b0118504dca" gracePeriod=30 Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.672530 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.934195821 podStartE2EDuration="6.672508462s" podCreationTimestamp="2026-01-27 18:31:21 +0000 UTC" firstStartedPulling="2026-01-27 18:31:22.71942468 +0000 UTC m=+1537.848707292" lastFinishedPulling="2026-01-27 18:31:26.457737331 +0000 UTC m=+1541.587019933" observedRunningTime="2026-01-27 18:31:27.651868398 +0000 UTC m=+1542.781151030" watchObservedRunningTime="2026-01-27 18:31:27.672508462 +0000 UTC m=+1542.801791074" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.685441 4907 scope.go:117] "RemoveContainer" containerID="5e95cc0b0b4316adc78436ac6c627d99d2aaa26db075120089d62720b076de22" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.708950 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.724802 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 27 18:31:27 crc kubenswrapper[4907]: W0127 18:31:27.726061 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52256d78_f327_4af2_9452_0483ad62dea0.slice/crio-af4a3b4ffdd62b01ad6d9ca0f5d18a1416941e784b90330b9cd6b38c7e9e30f1 WatchSource:0}: Error finding container af4a3b4ffdd62b01ad6d9ca0f5d18a1416941e784b90330b9cd6b38c7e9e30f1: Status 404 returned error can't find the 
container with id af4a3b4ffdd62b01ad6d9ca0f5d18a1416941e784b90330b9cd6b38c7e9e30f1 Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.739994 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-q8zd6"] Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.759271 4907 scope.go:117] "RemoveContainer" containerID="046248ed8d9a9abf6adc66c7f8fc1ac0fb750db71be11fc6d07cf5ab366d9f13" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.768231 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eec3a07c-7c6e-40ed-9a0a-f1952f923616" path="/var/lib/kubelet/pods/eec3a07c-7c6e-40ed-9a0a-f1952f923616/volumes" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.769786 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.775954 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.783210 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.783401 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-xd6ml" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.784989 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.785139 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.786056 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.788898 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 
18:31:27.806184 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-scripts\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.806222 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-internal-tls-certs\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.806361 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-combined-ca-bundle\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.806381 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2p9w\" (UniqueName: \"kubernetes.io/projected/a6c7b40d-63e2-4fbf-a59d-44c106984d76-kube-api-access-n2p9w\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.807282 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-public-tls-certs\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.807308 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-config-data\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.909337 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-internal-tls-certs\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.909379 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-scripts\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.909459 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-combined-ca-bundle\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.909485 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2p9w\" (UniqueName: \"kubernetes.io/projected/a6c7b40d-63e2-4fbf-a59d-44c106984d76-kube-api-access-n2p9w\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.909578 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-config-data\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.909599 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-public-tls-certs\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.917022 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-scripts\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.917165 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-internal-tls-certs\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.917341 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-public-tls-certs\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.918462 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-config-data\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.924161 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-combined-ca-bundle\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.926348 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2p9w\" (UniqueName: \"kubernetes.io/projected/a6c7b40d-63e2-4fbf-a59d-44c106984d76-kube-api-access-n2p9w\") pod \"aodh-0\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " pod="openstack/aodh-0" Jan 27 18:31:27 crc kubenswrapper[4907]: I0127 18:31:27.987205 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.063761 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.076663 4907 scope.go:117] "RemoveContainer" containerID="ea827ebb4a6c25ba9e21b4d37f520682eb5c11d4458964a25f261bd883feb99a" Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.081739 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-nft4l"] Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.082037 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7877d89589-nft4l" podUID="8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" containerName="dnsmasq-dns" containerID="cri-o://1a6a6f3405ecf1b0542db525557b379db577cc838c231c76b95fb8f82594f20e" gracePeriod=10 Jan 27 18:31:28 crc kubenswrapper[4907]: W0127 18:31:28.659461 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6c7b40d_63e2_4fbf_a59d_44c106984d76.slice/crio-5ecc4602ae7879f3b687cadcefcbbf374dc1042900cda4e1ea1d10119888edd8 WatchSource:0}: Error finding container 5ecc4602ae7879f3b687cadcefcbbf374dc1042900cda4e1ea1d10119888edd8: Status 404 returned error can't find the container with id 5ecc4602ae7879f3b687cadcefcbbf374dc1042900cda4e1ea1d10119888edd8 Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.660831 4907 generic.go:334] "Generic (PLEG): container finished" 
podID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerID="23102149ebcf564713aeb5d821044122ef3dd6c243f7d92d44d100330d367516" exitCode=0 Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.660871 4907 generic.go:334] "Generic (PLEG): container finished" podID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerID="cc48970436a23d36107729df6a28ff2694333a5023e59e9cfbb16739393b1415" exitCode=2 Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.660888 4907 generic.go:334] "Generic (PLEG): container finished" podID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerID="d8442c4901393f0e560356a3ebfb671914b2a645cd92b4cb32371b0118504dca" exitCode=0 Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.660978 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0","Type":"ContainerDied","Data":"23102149ebcf564713aeb5d821044122ef3dd6c243f7d92d44d100330d367516"} Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.661009 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0","Type":"ContainerDied","Data":"cc48970436a23d36107729df6a28ff2694333a5023e59e9cfbb16739393b1415"} Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.661021 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0","Type":"ContainerDied","Data":"d8442c4901393f0e560356a3ebfb671914b2a645cd92b4cb32371b0118504dca"} Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.661343 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.688174 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q8zd6" event={"ID":"52256d78-f327-4af2-9452-0483ad62dea0","Type":"ContainerStarted","Data":"9edeb33b4a8de205d14550b1bca2dae8e8b09e2147f0ba6205d2d29a2866b38b"} Jan 27 
18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.688224 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q8zd6" event={"ID":"52256d78-f327-4af2-9452-0483ad62dea0","Type":"ContainerStarted","Data":"af4a3b4ffdd62b01ad6d9ca0f5d18a1416941e784b90330b9cd6b38c7e9e30f1"} Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.702659 4907 generic.go:334] "Generic (PLEG): container finished" podID="8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" containerID="1a6a6f3405ecf1b0542db525557b379db577cc838c231c76b95fb8f82594f20e" exitCode=0 Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.702711 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-nft4l" event={"ID":"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6","Type":"ContainerDied","Data":"1a6a6f3405ecf1b0542db525557b379db577cc838c231c76b95fb8f82594f20e"} Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.722909 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-q8zd6" podStartSLOduration=2.722890197 podStartE2EDuration="2.722890197s" podCreationTimestamp="2026-01-27 18:31:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:31:28.713861442 +0000 UTC m=+1543.843144054" watchObservedRunningTime="2026-01-27 18:31:28.722890197 +0000 UTC m=+1543.852172819" Jan 27 18:31:28 crc kubenswrapper[4907]: I0127 18:31:28.932012 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.047963 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-svc\") pod \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.048004 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7r7nn\" (UniqueName: \"kubernetes.io/projected/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-kube-api-access-7r7nn\") pod \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.048068 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-nb\") pod \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.048113 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-sb\") pod \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.048179 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-swift-storage-0\") pod \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.048285 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-config\") pod \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\" (UID: \"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6\") " Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.103782 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-kube-api-access-7r7nn" (OuterVolumeSpecName: "kube-api-access-7r7nn") pod "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" (UID: "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6"). InnerVolumeSpecName "kube-api-access-7r7nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.138863 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" (UID: "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.157217 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" (UID: "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.158454 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7r7nn\" (UniqueName: \"kubernetes.io/projected/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-kube-api-access-7r7nn\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.160589 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.160748 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.187046 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-config" (OuterVolumeSpecName: "config") pod "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" (UID: "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.198699 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" (UID: "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.201190 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" (UID: "8dbf3816-36b8-40ed-8dc6-3faf4b571dd6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.263008 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.263046 4907 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.263057 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.716655 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a6c7b40d-63e2-4fbf-a59d-44c106984d76","Type":"ContainerStarted","Data":"e807df6bb34e8270bb99b18c9381629f1e3e316e54629be496e361af378d31fa"} Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.717856 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a6c7b40d-63e2-4fbf-a59d-44c106984d76","Type":"ContainerStarted","Data":"5ecc4602ae7879f3b687cadcefcbbf374dc1042900cda4e1ea1d10119888edd8"} Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.718813 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7877d89589-nft4l" event={"ID":"8dbf3816-36b8-40ed-8dc6-3faf4b571dd6","Type":"ContainerDied","Data":"d30a1110debfe208efbed5b40f2d5484a3e55ac5bae77f089255259716c05851"} Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.718968 4907 scope.go:117] "RemoveContainer" containerID="1a6a6f3405ecf1b0542db525557b379db577cc838c231c76b95fb8f82594f20e" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.718849 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:31:29 crc kubenswrapper[4907]: I0127 18:31:29.951654 4907 scope.go:117] "RemoveContainer" containerID="c47423de91ef3cdc23957a64f0feb2303eae5d5532344bab60096094e88a4b1a" Jan 27 18:31:30 crc kubenswrapper[4907]: I0127 18:31:30.739676 4907 generic.go:334] "Generic (PLEG): container finished" podID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerID="8f4db74e631f07d1a8028e659fbb70c4e579b183c47622db7e00b59be6c8e0e6" exitCode=0 Jan 27 18:31:30 crc kubenswrapper[4907]: I0127 18:31:30.740042 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0","Type":"ContainerDied","Data":"8f4db74e631f07d1a8028e659fbb70c4e579b183c47622db7e00b59be6c8e0e6"} Jan 27 18:31:30 crc kubenswrapper[4907]: I0127 18:31:30.743931 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a6c7b40d-63e2-4fbf-a59d-44c106984d76","Type":"ContainerStarted","Data":"2b423fdd77ea55fad9d249dd41924a653f901fafa619ba0c32a88c8e47c3ddff"} Jan 27 18:31:30 crc kubenswrapper[4907]: I0127 18:31:30.743965 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a6c7b40d-63e2-4fbf-a59d-44c106984d76","Type":"ContainerStarted","Data":"5c35d61e269c2c0e47646bb926be647ba9621988557f1e7af65617933e13dc86"} Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.242265 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.338209 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-scripts\") pod \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.338260 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-combined-ca-bundle\") pod \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.338523 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-run-httpd\") pod \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.338579 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-config-data\") pod \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.338671 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-sg-core-conf-yaml\") pod \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.338712 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-log-httpd\") pod \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.338776 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h8hq\" (UniqueName: \"kubernetes.io/projected/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-kube-api-access-6h8hq\") pod \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.340033 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" (UID: "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.340772 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" (UID: "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.345412 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-kube-api-access-6h8hq" (OuterVolumeSpecName: "kube-api-access-6h8hq") pod "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" (UID: "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0"). InnerVolumeSpecName "kube-api-access-6h8hq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.345738 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-scripts" (OuterVolumeSpecName: "scripts") pod "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" (UID: "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.383949 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" (UID: "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.439548 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" (UID: "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.440400 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-combined-ca-bundle\") pod \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\" (UID: \"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0\") " Jan 27 18:31:31 crc kubenswrapper[4907]: W0127 18:31:31.440588 4907 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0/volumes/kubernetes.io~secret/combined-ca-bundle Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.440620 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" (UID: "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.440985 4907 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.441007 4907 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.441016 4907 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.441026 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h8hq\" (UniqueName: \"kubernetes.io/projected/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-kube-api-access-6h8hq\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.441035 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.441043 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.467050 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-config-data" (OuterVolumeSpecName: "config-data") pod "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" (UID: "7f9601d7-af2b-4b4c-80cc-57a37df0f7f0"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.544034 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.761271 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.775414 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f9601d7-af2b-4b4c-80cc-57a37df0f7f0","Type":"ContainerDied","Data":"920949198b04d9f03e2ee16b588c61df0a8878a810066103d538193d6f3eeab6"} Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.775457 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a6c7b40d-63e2-4fbf-a59d-44c106984d76","Type":"ContainerStarted","Data":"d0ff64ffd6645fee3e3fa95ceff98c0ecb81b4ac75f1f079812bd17806737bad"} Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.775487 4907 scope.go:117] "RemoveContainer" containerID="23102149ebcf564713aeb5d821044122ef3dd6c243f7d92d44d100330d367516" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.819592 4907 scope.go:117] "RemoveContainer" containerID="cc48970436a23d36107729df6a28ff2694333a5023e59e9cfbb16739393b1415" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.831519 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.433306303 podStartE2EDuration="4.831479211s" podCreationTimestamp="2026-01-27 18:31:27 +0000 UTC" firstStartedPulling="2026-01-27 18:31:28.669743325 +0000 UTC m=+1543.799025937" lastFinishedPulling="2026-01-27 18:31:31.067916233 +0000 UTC m=+1546.197198845" observedRunningTime="2026-01-27 18:31:31.792434958 +0000 UTC m=+1546.921717570" 
watchObservedRunningTime="2026-01-27 18:31:31.831479211 +0000 UTC m=+1546.960761833" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.884017 4907 scope.go:117] "RemoveContainer" containerID="d8442c4901393f0e560356a3ebfb671914b2a645cd92b4cb32371b0118504dca" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.888430 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.908473 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.921309 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:31:31 crc kubenswrapper[4907]: E0127 18:31:31.922086 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="proxy-httpd" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.922104 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="proxy-httpd" Jan 27 18:31:31 crc kubenswrapper[4907]: E0127 18:31:31.922137 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="ceilometer-notification-agent" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.922146 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="ceilometer-notification-agent" Jan 27 18:31:31 crc kubenswrapper[4907]: E0127 18:31:31.922174 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" containerName="init" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.922182 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" containerName="init" Jan 27 18:31:31 crc kubenswrapper[4907]: E0127 18:31:31.922227 4907 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="ceilometer-central-agent" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.922235 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="ceilometer-central-agent" Jan 27 18:31:31 crc kubenswrapper[4907]: E0127 18:31:31.922252 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" containerName="dnsmasq-dns" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.922261 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" containerName="dnsmasq-dns" Jan 27 18:31:31 crc kubenswrapper[4907]: E0127 18:31:31.922281 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="sg-core" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.922288 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="sg-core" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.922600 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="ceilometer-notification-agent" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.922633 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" containerName="dnsmasq-dns" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.922648 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="sg-core" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.922658 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="ceilometer-central-agent" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.922677 4907 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" containerName="proxy-httpd" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.925283 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.929160 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.929960 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.944877 4907 scope.go:117] "RemoveContainer" containerID="8f4db74e631f07d1a8028e659fbb70c4e579b183c47622db7e00b59be6c8e0e6" Jan 27 18:31:31 crc kubenswrapper[4907]: I0127 18:31:31.947033 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.056687 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.057125 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkkc5\" (UniqueName: \"kubernetes.io/projected/68580f4b-e3d6-44f3-bff6-55be77887563-kube-api-access-mkkc5\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.057161 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-run-httpd\") pod 
\"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.057196 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-scripts\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.057294 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.057335 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-log-httpd\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.057448 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-config-data\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.159413 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-config-data\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.159496 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.159633 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkkc5\" (UniqueName: \"kubernetes.io/projected/68580f4b-e3d6-44f3-bff6-55be77887563-kube-api-access-mkkc5\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.159654 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-run-httpd\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.159692 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-scripts\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.159718 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.159740 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-log-httpd\") pod \"ceilometer-0\" (UID: 
\"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.160181 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-log-httpd\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.160437 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-run-httpd\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.165930 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-scripts\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.166523 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.166951 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.171807 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-config-data\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.182681 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkkc5\" (UniqueName: \"kubernetes.io/projected/68580f4b-e3d6-44f3-bff6-55be77887563-kube-api-access-mkkc5\") pod \"ceilometer-0\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.258165 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:31:32 crc kubenswrapper[4907]: I0127 18:31:32.780129 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:31:33 crc kubenswrapper[4907]: I0127 18:31:33.764998 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f9601d7-af2b-4b4c-80cc-57a37df0f7f0" path="/var/lib/kubelet/pods/7f9601d7-af2b-4b4c-80cc-57a37df0f7f0/volumes" Jan 27 18:31:33 crc kubenswrapper[4907]: I0127 18:31:33.804842 4907 generic.go:334] "Generic (PLEG): container finished" podID="52256d78-f327-4af2-9452-0483ad62dea0" containerID="9edeb33b4a8de205d14550b1bca2dae8e8b09e2147f0ba6205d2d29a2866b38b" exitCode=0 Jan 27 18:31:33 crc kubenswrapper[4907]: I0127 18:31:33.804912 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q8zd6" event={"ID":"52256d78-f327-4af2-9452-0483ad62dea0","Type":"ContainerDied","Data":"9edeb33b4a8de205d14550b1bca2dae8e8b09e2147f0ba6205d2d29a2866b38b"} Jan 27 18:31:33 crc kubenswrapper[4907]: I0127 18:31:33.809590 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68580f4b-e3d6-44f3-bff6-55be77887563","Type":"ContainerStarted","Data":"e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb"} Jan 27 18:31:33 crc 
kubenswrapper[4907]: I0127 18:31:33.809639 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68580f4b-e3d6-44f3-bff6-55be77887563","Type":"ContainerStarted","Data":"f3337d626f9eda7e83ce1fcded55327b44307f257a95be718e85ae9bc6d36459"} Jan 27 18:31:34 crc kubenswrapper[4907]: I0127 18:31:34.844650 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68580f4b-e3d6-44f3-bff6-55be77887563","Type":"ContainerStarted","Data":"e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2"} Jan 27 18:31:34 crc kubenswrapper[4907]: I0127 18:31:34.962644 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 18:31:34 crc kubenswrapper[4907]: I0127 18:31:34.963072 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.370263 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.439013 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-combined-ca-bundle\") pod \"52256d78-f327-4af2-9452-0483ad62dea0\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.439081 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-scripts\") pod \"52256d78-f327-4af2-9452-0483ad62dea0\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.439201 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2fzq\" (UniqueName: \"kubernetes.io/projected/52256d78-f327-4af2-9452-0483ad62dea0-kube-api-access-l2fzq\") pod \"52256d78-f327-4af2-9452-0483ad62dea0\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.439325 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-config-data\") pod \"52256d78-f327-4af2-9452-0483ad62dea0\" (UID: \"52256d78-f327-4af2-9452-0483ad62dea0\") " Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.456731 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-scripts" (OuterVolumeSpecName: "scripts") pod "52256d78-f327-4af2-9452-0483ad62dea0" (UID: "52256d78-f327-4af2-9452-0483ad62dea0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.456939 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52256d78-f327-4af2-9452-0483ad62dea0-kube-api-access-l2fzq" (OuterVolumeSpecName: "kube-api-access-l2fzq") pod "52256d78-f327-4af2-9452-0483ad62dea0" (UID: "52256d78-f327-4af2-9452-0483ad62dea0"). InnerVolumeSpecName "kube-api-access-l2fzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.473437 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52256d78-f327-4af2-9452-0483ad62dea0" (UID: "52256d78-f327-4af2-9452-0483ad62dea0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.490509 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-config-data" (OuterVolumeSpecName: "config-data") pod "52256d78-f327-4af2-9452-0483ad62dea0" (UID: "52256d78-f327-4af2-9452-0483ad62dea0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.542320 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.542380 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.542397 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52256d78-f327-4af2-9452-0483ad62dea0-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.542408 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2fzq\" (UniqueName: \"kubernetes.io/projected/52256d78-f327-4af2-9452-0483ad62dea0-kube-api-access-l2fzq\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.858713 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q8zd6" event={"ID":"52256d78-f327-4af2-9452-0483ad62dea0","Type":"ContainerDied","Data":"af4a3b4ffdd62b01ad6d9ca0f5d18a1416941e784b90330b9cd6b38c7e9e30f1"} Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.858763 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af4a3b4ffdd62b01ad6d9ca0f5d18a1416941e784b90330b9cd6b38c7e9e30f1" Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.858764 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q8zd6" Jan 27 18:31:35 crc kubenswrapper[4907]: I0127 18:31:35.861205 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68580f4b-e3d6-44f3-bff6-55be77887563","Type":"ContainerStarted","Data":"d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213"} Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.011537 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.012216 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerName="nova-api-api" containerID="cri-o://85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1" gracePeriod=30 Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.013442 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerName="nova-api-log" containerID="cri-o://2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee" gracePeriod=30 Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.022151 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.022163 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.035119 4907 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.039851 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="aa954535-3c88-43d5-ba61-2cc0c9c6690f" containerName="nova-scheduler-scheduler" containerID="cri-o://0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708" gracePeriod=30 Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.065333 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.065584 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-log" containerID="cri-o://047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177" gracePeriod=30 Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.066098 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-metadata" containerID="cri-o://7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af" gracePeriod=30 Jan 27 18:31:36 crc kubenswrapper[4907]: E0127 18:31:36.869092 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 18:31:36 crc kubenswrapper[4907]: E0127 18:31:36.872625 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 18:31:36 crc kubenswrapper[4907]: E0127 18:31:36.874119 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 18:31:36 crc kubenswrapper[4907]: E0127 18:31:36.874290 4907 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="aa954535-3c88-43d5-ba61-2cc0c9c6690f" containerName="nova-scheduler-scheduler" Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.879111 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68580f4b-e3d6-44f3-bff6-55be77887563","Type":"ContainerStarted","Data":"1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85"} Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.879679 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.882880 4907 generic.go:334] "Generic (PLEG): container finished" podID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerID="047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177" exitCode=143 Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.882953 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5aaba60-3b03-4a67-8862-7def0fe6f9d9","Type":"ContainerDied","Data":"047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177"} Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.885911 4907 
generic.go:334] "Generic (PLEG): container finished" podID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerID="2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee" exitCode=143 Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.885952 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55c85ba4-73c9-4126-8e07-9795c0cac323","Type":"ContainerDied","Data":"2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee"} Jan 27 18:31:36 crc kubenswrapper[4907]: I0127 18:31:36.911151 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.387380414 podStartE2EDuration="5.911126581s" podCreationTimestamp="2026-01-27 18:31:31 +0000 UTC" firstStartedPulling="2026-01-27 18:31:32.783163408 +0000 UTC m=+1547.912446020" lastFinishedPulling="2026-01-27 18:31:36.306909575 +0000 UTC m=+1551.436192187" observedRunningTime="2026-01-27 18:31:36.906691376 +0000 UTC m=+1552.035973998" watchObservedRunningTime="2026-01-27 18:31:36.911126581 +0000 UTC m=+1552.040409193" Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.226466 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.251:8775/\": read tcp 10.217.0.2:44042->10.217.0.251:8775: read: connection reset by peer" Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.226489 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.251:8775/\": read tcp 10.217.0.2:44052->10.217.0.251:8775: read: connection reset by peer" Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.776380 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.855671 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-959bm\" (UniqueName: \"kubernetes.io/projected/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-kube-api-access-959bm\") pod \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.855774 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-config-data\") pod \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.855960 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-logs\") pod \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.856056 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-combined-ca-bundle\") pod \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.856112 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-nova-metadata-tls-certs\") pod \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\" (UID: \"d5aaba60-3b03-4a67-8862-7def0fe6f9d9\") " Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.858928 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-logs" (OuterVolumeSpecName: "logs") pod "d5aaba60-3b03-4a67-8862-7def0fe6f9d9" (UID: "d5aaba60-3b03-4a67-8862-7def0fe6f9d9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.867893 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-kube-api-access-959bm" (OuterVolumeSpecName: "kube-api-access-959bm") pod "d5aaba60-3b03-4a67-8862-7def0fe6f9d9" (UID: "d5aaba60-3b03-4a67-8862-7def0fe6f9d9"). InnerVolumeSpecName "kube-api-access-959bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.892913 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5aaba60-3b03-4a67-8862-7def0fe6f9d9" (UID: "d5aaba60-3b03-4a67-8862-7def0fe6f9d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.930200 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-config-data" (OuterVolumeSpecName: "config-data") pod "d5aaba60-3b03-4a67-8862-7def0fe6f9d9" (UID: "d5aaba60-3b03-4a67-8862-7def0fe6f9d9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.932661 4907 generic.go:334] "Generic (PLEG): container finished" podID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerID="7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af" exitCode=0 Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.932710 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5aaba60-3b03-4a67-8862-7def0fe6f9d9","Type":"ContainerDied","Data":"7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af"} Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.932734 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5aaba60-3b03-4a67-8862-7def0fe6f9d9","Type":"ContainerDied","Data":"d180305b0a777de2ed578a449a3d5ecb18240c9610de3f6b48a1a28bd4f905ad"} Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.932750 4907 scope.go:117] "RemoveContainer" containerID="7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af" Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.932874 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.941084 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d5aaba60-3b03-4a67-8862-7def0fe6f9d9" (UID: "d5aaba60-3b03-4a67-8862-7def0fe6f9d9"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.959207 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.959247 4907 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.959261 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-959bm\" (UniqueName: \"kubernetes.io/projected/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-kube-api-access-959bm\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.959272 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:39 crc kubenswrapper[4907]: I0127 18:31:39.959282 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5aaba60-3b03-4a67-8862-7def0fe6f9d9-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.074898 4907 scope.go:117] "RemoveContainer" containerID="047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.186014 4907 scope.go:117] "RemoveContainer" containerID="7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af" Jan 27 18:31:40 crc kubenswrapper[4907]: E0127 18:31:40.192709 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af\": container with ID starting with 7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af not found: ID does not exist" containerID="7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.192757 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af"} err="failed to get container status \"7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af\": rpc error: code = NotFound desc = could not find container \"7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af\": container with ID starting with 7f5ea6fe3565a57b696a7260115d274affbb128778cc1b22363d7ef4b6ff82af not found: ID does not exist" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.192789 4907 scope.go:117] "RemoveContainer" containerID="047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177" Jan 27 18:31:40 crc kubenswrapper[4907]: E0127 18:31:40.193361 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177\": container with ID starting with 047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177 not found: ID does not exist" containerID="047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.193396 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177"} err="failed to get container status \"047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177\": rpc error: code = NotFound desc = could not find container \"047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177\": container with ID 
starting with 047d0029aaa11e890b9243f37c5e4563a026fa2591608cf9356c972a0a728177 not found: ID does not exist" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.274954 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.290009 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.311110 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:31:40 crc kubenswrapper[4907]: E0127 18:31:40.311691 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-log" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.311714 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-log" Jan 27 18:31:40 crc kubenswrapper[4907]: E0127 18:31:40.311741 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-metadata" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.311750 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-metadata" Jan 27 18:31:40 crc kubenswrapper[4907]: E0127 18:31:40.311781 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52256d78-f327-4af2-9452-0483ad62dea0" containerName="nova-manage" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.311788 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="52256d78-f327-4af2-9452-0483ad62dea0" containerName="nova-manage" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.312031 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-metadata" Jan 27 18:31:40 crc 
kubenswrapper[4907]: I0127 18:31:40.312054 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="52256d78-f327-4af2-9452-0483ad62dea0" containerName="nova-manage" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.312088 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" containerName="nova-metadata-log" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.313729 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.324204 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.324399 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.341501 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.386446 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-config-data\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.386574 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.386737 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fv78v\" (UniqueName: \"kubernetes.io/projected/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-kube-api-access-fv78v\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.386938 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-logs\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.387052 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.489841 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-config-data\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.489899 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.489947 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv78v\" (UniqueName: \"kubernetes.io/projected/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-kube-api-access-fv78v\") pod 
\"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.490012 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-logs\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.490073 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.490798 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-logs\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.508570 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.511807 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.517428 4907 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fv78v\" (UniqueName: \"kubernetes.io/projected/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-kube-api-access-fv78v\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.517805 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7-config-data\") pod \"nova-metadata-0\" (UID: \"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7\") " pod="openstack/nova-metadata-0" Jan 27 18:31:40 crc kubenswrapper[4907]: I0127 18:31:40.678449 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.195159 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:31:41 crc kubenswrapper[4907]: W0127 18:31:41.199811 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e3cb7a2_d4f9_43bf_a1e5_6486f796f9a7.slice/crio-93689b772e877ea04d75168c846c03a0ba93a100ac6f10b76a69db699c485541 WatchSource:0}: Error finding container 93689b772e877ea04d75168c846c03a0ba93a100ac6f10b76a69db699c485541: Status 404 returned error can't find the container with id 93689b772e877ea04d75168c846c03a0ba93a100ac6f10b76a69db699c485541 Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.670623 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.726281 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv2nz\" (UniqueName: \"kubernetes.io/projected/aa954535-3c88-43d5-ba61-2cc0c9c6690f-kube-api-access-fv2nz\") pod \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") " Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.726421 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-combined-ca-bundle\") pod \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") " Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.726465 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-config-data\") pod \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\" (UID: \"aa954535-3c88-43d5-ba61-2cc0c9c6690f\") " Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.739115 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa954535-3c88-43d5-ba61-2cc0c9c6690f-kube-api-access-fv2nz" (OuterVolumeSpecName: "kube-api-access-fv2nz") pod "aa954535-3c88-43d5-ba61-2cc0c9c6690f" (UID: "aa954535-3c88-43d5-ba61-2cc0c9c6690f"). InnerVolumeSpecName "kube-api-access-fv2nz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.767516 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5aaba60-3b03-4a67-8862-7def0fe6f9d9" path="/var/lib/kubelet/pods/d5aaba60-3b03-4a67-8862-7def0fe6f9d9/volumes" Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.794167 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-config-data" (OuterVolumeSpecName: "config-data") pod "aa954535-3c88-43d5-ba61-2cc0c9c6690f" (UID: "aa954535-3c88-43d5-ba61-2cc0c9c6690f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.800638 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa954535-3c88-43d5-ba61-2cc0c9c6690f" (UID: "aa954535-3c88-43d5-ba61-2cc0c9c6690f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.829298 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.829337 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa954535-3c88-43d5-ba61-2cc0c9c6690f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.829347 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv2nz\" (UniqueName: \"kubernetes.io/projected/aa954535-3c88-43d5-ba61-2cc0c9c6690f-kube-api-access-fv2nz\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.948348 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.964350 4907 generic.go:334] "Generic (PLEG): container finished" podID="aa954535-3c88-43d5-ba61-2cc0c9c6690f" containerID="0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708" exitCode=0 Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.964467 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aa954535-3c88-43d5-ba61-2cc0c9c6690f","Type":"ContainerDied","Data":"0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708"} Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.964530 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aa954535-3c88-43d5-ba61-2cc0c9c6690f","Type":"ContainerDied","Data":"5489fcea5b6ae89e6b35c046631a54afc66829b16e074f7f6498d1c0a256c442"} Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.964542 4907 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.964569 4907 scope.go:117] "RemoveContainer" containerID="0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708" Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.969009 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7","Type":"ContainerStarted","Data":"1a4efe9c57671bc8a25a10a2ae69e4708cda26001d8d84203db8d766f1922bd5"} Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.969053 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7","Type":"ContainerStarted","Data":"69957ade714649bbef7c1477912eeef67da9190958c15fef866a608cd0ea10c2"} Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.969063 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7","Type":"ContainerStarted","Data":"93689b772e877ea04d75168c846c03a0ba93a100ac6f10b76a69db699c485541"} Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.978089 4907 generic.go:334] "Generic (PLEG): container finished" podID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerID="85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1" exitCode=0 Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.978152 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55c85ba4-73c9-4126-8e07-9795c0cac323","Type":"ContainerDied","Data":"85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1"} Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.978189 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"55c85ba4-73c9-4126-8e07-9795c0cac323","Type":"ContainerDied","Data":"e4d6a13adca6c0e1bb31637f5bdeeb280fdc3bcb042efa87415ac3e6115add8f"} Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.978240 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:31:41 crc kubenswrapper[4907]: I0127 18:31:41.999793 4907 scope.go:117] "RemoveContainer" containerID="0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708" Jan 27 18:31:42 crc kubenswrapper[4907]: E0127 18:31:42.000306 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708\": container with ID starting with 0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708 not found: ID does not exist" containerID="0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.000346 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708"} err="failed to get container status \"0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708\": rpc error: code = NotFound desc = could not find container \"0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708\": container with ID starting with 0913ab36f2c021487d4038e0bd478960db8e4300d591f183f9e30b4995a82708 not found: ID does not exist" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.000374 4907 scope.go:117] "RemoveContainer" containerID="85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.022765 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.022747094 podStartE2EDuration="2.022747094s" 
podCreationTimestamp="2026-01-27 18:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:31:42.017202047 +0000 UTC m=+1557.146484659" watchObservedRunningTime="2026-01-27 18:31:42.022747094 +0000 UTC m=+1557.152029706" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.033573 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c85ba4-73c9-4126-8e07-9795c0cac323-logs\") pod \"55c85ba4-73c9-4126-8e07-9795c0cac323\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.033646 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqn6g\" (UniqueName: \"kubernetes.io/projected/55c85ba4-73c9-4126-8e07-9795c0cac323-kube-api-access-jqn6g\") pod \"55c85ba4-73c9-4126-8e07-9795c0cac323\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.033787 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-combined-ca-bundle\") pod \"55c85ba4-73c9-4126-8e07-9795c0cac323\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.033858 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-internal-tls-certs\") pod \"55c85ba4-73c9-4126-8e07-9795c0cac323\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.033929 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-config-data\") pod 
\"55c85ba4-73c9-4126-8e07-9795c0cac323\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.033992 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-public-tls-certs\") pod \"55c85ba4-73c9-4126-8e07-9795c0cac323\" (UID: \"55c85ba4-73c9-4126-8e07-9795c0cac323\") " Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.034267 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55c85ba4-73c9-4126-8e07-9795c0cac323-logs" (OuterVolumeSpecName: "logs") pod "55c85ba4-73c9-4126-8e07-9795c0cac323" (UID: "55c85ba4-73c9-4126-8e07-9795c0cac323"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.038380 4907 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c85ba4-73c9-4126-8e07-9795c0cac323-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.047743 4907 scope.go:117] "RemoveContainer" containerID="2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.051836 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55c85ba4-73c9-4126-8e07-9795c0cac323-kube-api-access-jqn6g" (OuterVolumeSpecName: "kube-api-access-jqn6g") pod "55c85ba4-73c9-4126-8e07-9795c0cac323" (UID: "55c85ba4-73c9-4126-8e07-9795c0cac323"). InnerVolumeSpecName "kube-api-access-jqn6g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.083736 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.109320 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-config-data" (OuterVolumeSpecName: "config-data") pod "55c85ba4-73c9-4126-8e07-9795c0cac323" (UID: "55c85ba4-73c9-4126-8e07-9795c0cac323"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.109624 4907 scope.go:117] "RemoveContainer" containerID="85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1" Jan 27 18:31:42 crc kubenswrapper[4907]: E0127 18:31:42.110990 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1\": container with ID starting with 85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1 not found: ID does not exist" containerID="85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.111034 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1"} err="failed to get container status \"85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1\": rpc error: code = NotFound desc = could not find container \"85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1\": container with ID starting with 85097b8a683a7453ec24f2da161edaaa54bbbf3d0bba95a976ef05e5781b82d1 not found: ID does not exist" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.111062 4907 scope.go:117] "RemoveContainer" 
containerID="2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee" Jan 27 18:31:42 crc kubenswrapper[4907]: E0127 18:31:42.111469 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee\": container with ID starting with 2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee not found: ID does not exist" containerID="2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.111492 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee"} err="failed to get container status \"2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee\": rpc error: code = NotFound desc = could not find container \"2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee\": container with ID starting with 2ccd84ee11bf82a3af3e4a43a9962a41e35974615c36fe4d9abebd24e9a773ee not found: ID does not exist" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.113944 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55c85ba4-73c9-4126-8e07-9795c0cac323" (UID: "55c85ba4-73c9-4126-8e07-9795c0cac323"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.123901 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.141539 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqn6g\" (UniqueName: \"kubernetes.io/projected/55c85ba4-73c9-4126-8e07-9795c0cac323-kube-api-access-jqn6g\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.141812 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.141916 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.158414 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:31:42 crc kubenswrapper[4907]: E0127 18:31:42.159289 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa954535-3c88-43d5-ba61-2cc0c9c6690f" containerName="nova-scheduler-scheduler" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.159316 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa954535-3c88-43d5-ba61-2cc0c9c6690f" containerName="nova-scheduler-scheduler" Jan 27 18:31:42 crc kubenswrapper[4907]: E0127 18:31:42.159354 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerName="nova-api-api" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.159362 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerName="nova-api-api" Jan 27 18:31:42 crc 
kubenswrapper[4907]: E0127 18:31:42.159389 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerName="nova-api-log" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.159396 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerName="nova-api-log" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.159762 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerName="nova-api-log" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.159780 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa954535-3c88-43d5-ba61-2cc0c9c6690f" containerName="nova-scheduler-scheduler" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.159801 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="55c85ba4-73c9-4126-8e07-9795c0cac323" containerName="nova-api-api" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.161289 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.163725 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.173501 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "55c85ba4-73c9-4126-8e07-9795c0cac323" (UID: "55c85ba4-73c9-4126-8e07-9795c0cac323"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.175836 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.177969 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "55c85ba4-73c9-4126-8e07-9795c0cac323" (UID: "55c85ba4-73c9-4126-8e07-9795c0cac323"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.244106 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a-config-data\") pod \"nova-scheduler-0\" (UID: \"cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a\") " pod="openstack/nova-scheduler-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.244423 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7tz7\" (UniqueName: \"kubernetes.io/projected/cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a-kube-api-access-c7tz7\") pod \"nova-scheduler-0\" (UID: \"cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a\") " pod="openstack/nova-scheduler-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.244650 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a\") " pod="openstack/nova-scheduler-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.245287 4907 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.245305 4907 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c85ba4-73c9-4126-8e07-9795c0cac323-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.314135 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.329292 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.339629 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.341814 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.348037 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a-config-data\") pod \"nova-scheduler-0\" (UID: \"cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a\") " pod="openstack/nova-scheduler-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.348226 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7tz7\" (UniqueName: \"kubernetes.io/projected/cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a-kube-api-access-c7tz7\") pod \"nova-scheduler-0\" (UID: \"cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a\") " pod="openstack/nova-scheduler-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.348315 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a-combined-ca-bundle\") pod 
\"nova-scheduler-0\" (UID: \"cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a\") " pod="openstack/nova-scheduler-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.351911 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a\") " pod="openstack/nova-scheduler-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.352649 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a-config-data\") pod \"nova-scheduler-0\" (UID: \"cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a\") " pod="openstack/nova-scheduler-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.355177 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.355193 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.355327 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.357085 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.375089 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7tz7\" (UniqueName: \"kubernetes.io/projected/cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a-kube-api-access-c7tz7\") pod \"nova-scheduler-0\" (UID: \"cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a\") " pod="openstack/nova-scheduler-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.450647 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.450699 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lgtj\" (UniqueName: \"kubernetes.io/projected/aafbf219-964f-4436-964e-7ad85e0eb56b-kube-api-access-4lgtj\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.451377 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-config-data\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.451688 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aafbf219-964f-4436-964e-7ad85e0eb56b-logs\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.451853 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.451992 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.493419 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.554809 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-config-data\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.555218 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aafbf219-964f-4436-964e-7ad85e0eb56b-logs\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.555377 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.555488 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-public-tls-certs\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.555652 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 
18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.555675 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lgtj\" (UniqueName: \"kubernetes.io/projected/aafbf219-964f-4436-964e-7ad85e0eb56b-kube-api-access-4lgtj\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.556213 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aafbf219-964f-4436-964e-7ad85e0eb56b-logs\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.559316 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-config-data\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.560594 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.562270 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-public-tls-certs\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.566519 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aafbf219-964f-4436-964e-7ad85e0eb56b-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.574540 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lgtj\" (UniqueName: \"kubernetes.io/projected/aafbf219-964f-4436-964e-7ad85e0eb56b-kube-api-access-4lgtj\") pod \"nova-api-0\" (UID: \"aafbf219-964f-4436-964e-7ad85e0eb56b\") " pod="openstack/nova-api-0" Jan 27 18:31:42 crc kubenswrapper[4907]: I0127 18:31:42.746889 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:31:43 crc kubenswrapper[4907]: I0127 18:31:43.039145 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:31:43 crc kubenswrapper[4907]: I0127 18:31:43.245198 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:31:43 crc kubenswrapper[4907]: W0127 18:31:43.246112 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaafbf219_964f_4436_964e_7ad85e0eb56b.slice/crio-87a41da3d1b6ce51557ca53a8fd969b5c9a53b8c2b33699b6113af8304f4afcf WatchSource:0}: Error finding container 87a41da3d1b6ce51557ca53a8fd969b5c9a53b8c2b33699b6113af8304f4afcf: Status 404 returned error can't find the container with id 87a41da3d1b6ce51557ca53a8fd969b5c9a53b8c2b33699b6113af8304f4afcf Jan 27 18:31:43 crc kubenswrapper[4907]: I0127 18:31:43.768498 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55c85ba4-73c9-4126-8e07-9795c0cac323" path="/var/lib/kubelet/pods/55c85ba4-73c9-4126-8e07-9795c0cac323/volumes" Jan 27 18:31:43 crc kubenswrapper[4907]: I0127 18:31:43.771226 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa954535-3c88-43d5-ba61-2cc0c9c6690f" path="/var/lib/kubelet/pods/aa954535-3c88-43d5-ba61-2cc0c9c6690f/volumes" Jan 27 18:31:44 crc kubenswrapper[4907]: I0127 18:31:44.012262 
4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aafbf219-964f-4436-964e-7ad85e0eb56b","Type":"ContainerStarted","Data":"0fef0bc5290e7ec8b0c52cf6f1b31c87680e6f9bfa2ea0549a9e48e13d3a07ab"} Jan 27 18:31:44 crc kubenswrapper[4907]: I0127 18:31:44.012776 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aafbf219-964f-4436-964e-7ad85e0eb56b","Type":"ContainerStarted","Data":"0c3f8522c97a29de9331926b2f53fe83a52df5413bbf670ee81fa26653dca301"} Jan 27 18:31:44 crc kubenswrapper[4907]: I0127 18:31:44.012795 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aafbf219-964f-4436-964e-7ad85e0eb56b","Type":"ContainerStarted","Data":"87a41da3d1b6ce51557ca53a8fd969b5c9a53b8c2b33699b6113af8304f4afcf"} Jan 27 18:31:44 crc kubenswrapper[4907]: I0127 18:31:44.016233 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a","Type":"ContainerStarted","Data":"64bf0ca613eef128a86b6db3405f955e3d83c371fae524e00fbf47188789f929"} Jan 27 18:31:44 crc kubenswrapper[4907]: I0127 18:31:44.016283 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a","Type":"ContainerStarted","Data":"01901d4f80469b3bf5d5105ef6a7ab23158667a4011526df2930978c249e7c05"} Jan 27 18:31:44 crc kubenswrapper[4907]: I0127 18:31:44.060511 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.060460113 podStartE2EDuration="2.060460113s" podCreationTimestamp="2026-01-27 18:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:31:44.033613435 +0000 UTC m=+1559.162896057" watchObservedRunningTime="2026-01-27 18:31:44.060460113 +0000 UTC m=+1559.189742725" Jan 27 
18:31:44 crc kubenswrapper[4907]: I0127 18:31:44.065999 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.06597856 podStartE2EDuration="2.06597856s" podCreationTimestamp="2026-01-27 18:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:31:44.051609493 +0000 UTC m=+1559.180892105" watchObservedRunningTime="2026-01-27 18:31:44.06597856 +0000 UTC m=+1559.195261172" Jan 27 18:31:45 crc kubenswrapper[4907]: I0127 18:31:45.679425 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 18:31:45 crc kubenswrapper[4907]: I0127 18:31:45.679472 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 18:31:47 crc kubenswrapper[4907]: I0127 18:31:47.495062 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 18:31:50 crc kubenswrapper[4907]: I0127 18:31:50.679574 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 18:31:50 crc kubenswrapper[4907]: I0127 18:31:50.680200 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 18:31:51 crc kubenswrapper[4907]: I0127 18:31:51.694705 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.7:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 18:31:51 crc kubenswrapper[4907]: I0127 18:31:51.694817 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7" containerName="nova-metadata-log" 
probeResult="failure" output="Get \"https://10.217.1.7:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 18:31:52 crc kubenswrapper[4907]: I0127 18:31:52.495415 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 18:31:52 crc kubenswrapper[4907]: I0127 18:31:52.542462 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 18:31:52 crc kubenswrapper[4907]: I0127 18:31:52.747937 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 18:31:52 crc kubenswrapper[4907]: I0127 18:31:52.747987 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 18:31:53 crc kubenswrapper[4907]: I0127 18:31:53.161548 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 18:31:53 crc kubenswrapper[4907]: I0127 18:31:53.762773 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aafbf219-964f-4436-964e-7ad85e0eb56b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.9:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 18:31:53 crc kubenswrapper[4907]: I0127 18:31:53.762831 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aafbf219-964f-4436-964e-7ad85e0eb56b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.9:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 18:31:56 crc kubenswrapper[4907]: I0127 18:31:56.523771 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:31:56 crc kubenswrapper[4907]: I0127 18:31:56.524113 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:31:56 crc kubenswrapper[4907]: I0127 18:31:56.524170 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:31:56 crc kubenswrapper[4907]: I0127 18:31:56.525105 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:31:56 crc kubenswrapper[4907]: I0127 18:31:56.525155 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" gracePeriod=600 Jan 27 18:31:56 crc kubenswrapper[4907]: E0127 18:31:56.646182 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:31:57 
crc kubenswrapper[4907]: I0127 18:31:57.174075 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" exitCode=0 Jan 27 18:31:57 crc kubenswrapper[4907]: I0127 18:31:57.174132 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402"} Jan 27 18:31:57 crc kubenswrapper[4907]: I0127 18:31:57.174176 4907 scope.go:117] "RemoveContainer" containerID="09255cfe56907a7b3b5ba34ba9dd0c7542d64f0e4b965b5da61b9cf87189cb31" Jan 27 18:31:57 crc kubenswrapper[4907]: I0127 18:31:57.175062 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:31:57 crc kubenswrapper[4907]: E0127 18:31:57.175422 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:31:59 crc kubenswrapper[4907]: I0127 18:31:59.920459 4907 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod8dbf3816-36b8-40ed-8dc6-3faf4b571dd6"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod8dbf3816-36b8-40ed-8dc6-3faf4b571dd6] : Timed out while waiting for systemd to remove kubepods-besteffort-pod8dbf3816_36b8_40ed_8dc6_3faf4b571dd6.slice" Jan 27 18:31:59 crc kubenswrapper[4907]: E0127 18:31:59.920866 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
delete cgroup paths for [kubepods besteffort pod8dbf3816-36b8-40ed-8dc6-3faf4b571dd6] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod8dbf3816-36b8-40ed-8dc6-3faf4b571dd6] : Timed out while waiting for systemd to remove kubepods-besteffort-pod8dbf3816_36b8_40ed_8dc6_3faf4b571dd6.slice" pod="openstack/dnsmasq-dns-7877d89589-nft4l" podUID="8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" Jan 27 18:32:00 crc kubenswrapper[4907]: I0127 18:32:00.208449 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-nft4l" Jan 27 18:32:00 crc kubenswrapper[4907]: I0127 18:32:00.254214 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-nft4l"] Jan 27 18:32:00 crc kubenswrapper[4907]: I0127 18:32:00.270847 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-nft4l"] Jan 27 18:32:00 crc kubenswrapper[4907]: I0127 18:32:00.686047 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 18:32:00 crc kubenswrapper[4907]: I0127 18:32:00.689384 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 18:32:00 crc kubenswrapper[4907]: I0127 18:32:00.696652 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 18:32:01 crc kubenswrapper[4907]: I0127 18:32:01.225674 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 18:32:01 crc kubenswrapper[4907]: I0127 18:32:01.761743 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dbf3816-36b8-40ed-8dc6-3faf4b571dd6" path="/var/lib/kubelet/pods/8dbf3816-36b8-40ed-8dc6-3faf4b571dd6/volumes" Jan 27 18:32:02 crc kubenswrapper[4907]: I0127 18:32:02.270394 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ceilometer-0" Jan 27 18:32:02 crc kubenswrapper[4907]: I0127 18:32:02.756070 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 18:32:02 crc kubenswrapper[4907]: I0127 18:32:02.756128 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 18:32:02 crc kubenswrapper[4907]: I0127 18:32:02.756936 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 18:32:02 crc kubenswrapper[4907]: I0127 18:32:02.757460 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 18:32:02 crc kubenswrapper[4907]: I0127 18:32:02.765297 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 18:32:02 crc kubenswrapper[4907]: I0127 18:32:02.765342 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 18:32:07 crc kubenswrapper[4907]: I0127 18:32:07.315224 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 18:32:07 crc kubenswrapper[4907]: I0127 18:32:07.317679 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="f00f131e-56a8-4fae-a498-798713d2159f" containerName="kube-state-metrics" containerID="cri-o://b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13" gracePeriod=30 Jan 27 18:32:07 crc kubenswrapper[4907]: I0127 18:32:07.434994 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 18:32:07 crc kubenswrapper[4907]: I0127 18:32:07.435235 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="a0791214-d591-446c-a64a-e1e0f237392e" containerName="mysqld-exporter" 
containerID="cri-o://42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129" gracePeriod=30 Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.062385 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.069818 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.173582 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98fhc\" (UniqueName: \"kubernetes.io/projected/a0791214-d591-446c-a64a-e1e0f237392e-kube-api-access-98fhc\") pod \"a0791214-d591-446c-a64a-e1e0f237392e\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.173674 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-config-data\") pod \"a0791214-d591-446c-a64a-e1e0f237392e\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.173799 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vczdr\" (UniqueName: \"kubernetes.io/projected/f00f131e-56a8-4fae-a498-798713d2159f-kube-api-access-vczdr\") pod \"f00f131e-56a8-4fae-a498-798713d2159f\" (UID: \"f00f131e-56a8-4fae-a498-798713d2159f\") " Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.173932 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-combined-ca-bundle\") pod \"a0791214-d591-446c-a64a-e1e0f237392e\" (UID: \"a0791214-d591-446c-a64a-e1e0f237392e\") " Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.180474 4907 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f00f131e-56a8-4fae-a498-798713d2159f-kube-api-access-vczdr" (OuterVolumeSpecName: "kube-api-access-vczdr") pod "f00f131e-56a8-4fae-a498-798713d2159f" (UID: "f00f131e-56a8-4fae-a498-798713d2159f"). InnerVolumeSpecName "kube-api-access-vczdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.183057 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0791214-d591-446c-a64a-e1e0f237392e-kube-api-access-98fhc" (OuterVolumeSpecName: "kube-api-access-98fhc") pod "a0791214-d591-446c-a64a-e1e0f237392e" (UID: "a0791214-d591-446c-a64a-e1e0f237392e"). InnerVolumeSpecName "kube-api-access-98fhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.208493 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0791214-d591-446c-a64a-e1e0f237392e" (UID: "a0791214-d591-446c-a64a-e1e0f237392e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.240982 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-config-data" (OuterVolumeSpecName: "config-data") pod "a0791214-d591-446c-a64a-e1e0f237392e" (UID: "a0791214-d591-446c-a64a-e1e0f237392e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.277324 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98fhc\" (UniqueName: \"kubernetes.io/projected/a0791214-d591-446c-a64a-e1e0f237392e-kube-api-access-98fhc\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.277357 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.277369 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vczdr\" (UniqueName: \"kubernetes.io/projected/f00f131e-56a8-4fae-a498-798713d2159f-kube-api-access-vczdr\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.277377 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0791214-d591-446c-a64a-e1e0f237392e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.318557 4907 generic.go:334] "Generic (PLEG): container finished" podID="a0791214-d591-446c-a64a-e1e0f237392e" containerID="42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129" exitCode=2 Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.318632 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.318671 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"a0791214-d591-446c-a64a-e1e0f237392e","Type":"ContainerDied","Data":"42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129"} Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.318727 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"a0791214-d591-446c-a64a-e1e0f237392e","Type":"ContainerDied","Data":"77ad6c9e5a77976fb05e5e7d1ba3b89c46ab28c4b5ebd3785c45814aff8d537c"} Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.318758 4907 scope.go:117] "RemoveContainer" containerID="42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.321333 4907 generic.go:334] "Generic (PLEG): container finished" podID="f00f131e-56a8-4fae-a498-798713d2159f" containerID="b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13" exitCode=2 Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.321364 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f00f131e-56a8-4fae-a498-798713d2159f","Type":"ContainerDied","Data":"b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13"} Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.321384 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f00f131e-56a8-4fae-a498-798713d2159f","Type":"ContainerDied","Data":"3673b3443d4ba1d7f90e11d19590b6b725d3fd74d821289ba3dea4614690e212"} Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.321448 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.367702 4907 scope.go:117] "RemoveContainer" containerID="42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129" Jan 27 18:32:08 crc kubenswrapper[4907]: E0127 18:32:08.368252 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129\": container with ID starting with 42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129 not found: ID does not exist" containerID="42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.368276 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129"} err="failed to get container status \"42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129\": rpc error: code = NotFound desc = could not find container \"42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129\": container with ID starting with 42a33b2632c4cbb65d252562ef05e11db0af7489a8b9ca360e01edb5bbb86129 not found: ID does not exist" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.368304 4907 scope.go:117] "RemoveContainer" containerID="b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.384967 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.421799 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.439874 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 
18:32:08.452511 4907 scope.go:117] "RemoveContainer" containerID="b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13" Jan 27 18:32:08 crc kubenswrapper[4907]: E0127 18:32:08.453070 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13\": container with ID starting with b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13 not found: ID does not exist" containerID="b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.453108 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13"} err="failed to get container status \"b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13\": rpc error: code = NotFound desc = could not find container \"b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13\": container with ID starting with b57fec2667eb94dc13d611d91f3434da162f945effedc55f932a4633235a5f13 not found: ID does not exist" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.456676 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 18:32:08 crc kubenswrapper[4907]: E0127 18:32:08.457312 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f00f131e-56a8-4fae-a498-798713d2159f" containerName="kube-state-metrics" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.457341 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00f131e-56a8-4fae-a498-798713d2159f" containerName="kube-state-metrics" Jan 27 18:32:08 crc kubenswrapper[4907]: E0127 18:32:08.457371 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0791214-d591-446c-a64a-e1e0f237392e" containerName="mysqld-exporter" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 
18:32:08.457380 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0791214-d591-446c-a64a-e1e0f237392e" containerName="mysqld-exporter" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.457673 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f00f131e-56a8-4fae-a498-798713d2159f" containerName="kube-state-metrics" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.457710 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0791214-d591-446c-a64a-e1e0f237392e" containerName="mysqld-exporter" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.458693 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.462080 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.465730 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.478647 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.499084 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.536470 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.540927 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.544083 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.544408 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.565062 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.584978 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/611de5af-e33a-4aca-88c7-201f7c0e6cf9-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.585348 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/611de5af-e33a-4aca-88c7-201f7c0e6cf9-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.585678 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ffht\" (UniqueName: \"kubernetes.io/projected/611de5af-e33a-4aca-88c7-201f7c0e6cf9-kube-api-access-8ffht\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.585844 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/611de5af-e33a-4aca-88c7-201f7c0e6cf9-config-data\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.687849 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/611de5af-e33a-4aca-88c7-201f7c0e6cf9-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.687927 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/611de5af-e33a-4aca-88c7-201f7c0e6cf9-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.688030 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.688069 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ffht\" (UniqueName: \"kubernetes.io/projected/611de5af-e33a-4aca-88c7-201f7c0e6cf9-kube-api-access-8ffht\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.688093 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.688130 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.688169 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/611de5af-e33a-4aca-88c7-201f7c0e6cf9-config-data\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.688192 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv82k\" (UniqueName: \"kubernetes.io/projected/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-kube-api-access-fv82k\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.693117 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/611de5af-e33a-4aca-88c7-201f7c0e6cf9-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.693793 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/611de5af-e33a-4aca-88c7-201f7c0e6cf9-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.696344 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/611de5af-e33a-4aca-88c7-201f7c0e6cf9-config-data\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.710226 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ffht\" (UniqueName: \"kubernetes.io/projected/611de5af-e33a-4aca-88c7-201f7c0e6cf9-kube-api-access-8ffht\") pod \"mysqld-exporter-0\" (UID: \"611de5af-e33a-4aca-88c7-201f7c0e6cf9\") " pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.748195 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:32:08 crc kubenswrapper[4907]: E0127 18:32:08.748608 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.790754 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0" Jan 27 
18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.790810 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.790860 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.790902 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv82k\" (UniqueName: \"kubernetes.io/projected/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-kube-api-access-fv82k\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.797015 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.801572 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.805147 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.815441 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv82k\" (UniqueName: \"kubernetes.io/projected/edbdf1e9-d0d7-458d-8f5a-891ee37d7483-kube-api-access-fv82k\") pod \"kube-state-metrics-0\" (UID: \"edbdf1e9-d0d7-458d-8f5a-891ee37d7483\") " pod="openstack/kube-state-metrics-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.818235 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 18:32:08 crc kubenswrapper[4907]: I0127 18:32:08.877876 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 18:32:09 crc kubenswrapper[4907]: I0127 18:32:09.382194 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 18:32:09 crc kubenswrapper[4907]: I0127 18:32:09.398462 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 18:32:09 crc kubenswrapper[4907]: I0127 18:32:09.576020 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:32:09 crc kubenswrapper[4907]: I0127 18:32:09.576277 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="ceilometer-central-agent" containerID="cri-o://e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb" gracePeriod=30 Jan 27 18:32:09 crc kubenswrapper[4907]: I0127 18:32:09.576319 4907 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="proxy-httpd" containerID="cri-o://1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85" gracePeriod=30 Jan 27 18:32:09 crc kubenswrapper[4907]: I0127 18:32:09.576387 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="sg-core" containerID="cri-o://d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213" gracePeriod=30 Jan 27 18:32:09 crc kubenswrapper[4907]: I0127 18:32:09.576427 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="ceilometer-notification-agent" containerID="cri-o://e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2" gracePeriod=30 Jan 27 18:32:09 crc kubenswrapper[4907]: I0127 18:32:09.762663 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0791214-d591-446c-a64a-e1e0f237392e" path="/var/lib/kubelet/pods/a0791214-d591-446c-a64a-e1e0f237392e/volumes" Jan 27 18:32:09 crc kubenswrapper[4907]: I0127 18:32:09.763399 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f00f131e-56a8-4fae-a498-798713d2159f" path="/var/lib/kubelet/pods/f00f131e-56a8-4fae-a498-798713d2159f/volumes" Jan 27 18:32:10 crc kubenswrapper[4907]: I0127 18:32:10.355608 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"611de5af-e33a-4aca-88c7-201f7c0e6cf9","Type":"ContainerStarted","Data":"91a848f3efb02c072ae558b808c261391451a03401d7d4ed16a718774fe79122"} Jan 27 18:32:10 crc kubenswrapper[4907]: I0127 18:32:10.367766 4907 generic.go:334] "Generic (PLEG): container finished" podID="68580f4b-e3d6-44f3-bff6-55be77887563" containerID="1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85" exitCode=0 Jan 27 18:32:10 crc 
kubenswrapper[4907]: I0127 18:32:10.367797 4907 generic.go:334] "Generic (PLEG): container finished" podID="68580f4b-e3d6-44f3-bff6-55be77887563" containerID="d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213" exitCode=2 Jan 27 18:32:10 crc kubenswrapper[4907]: I0127 18:32:10.367806 4907 generic.go:334] "Generic (PLEG): container finished" podID="68580f4b-e3d6-44f3-bff6-55be77887563" containerID="e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb" exitCode=0 Jan 27 18:32:10 crc kubenswrapper[4907]: I0127 18:32:10.367860 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68580f4b-e3d6-44f3-bff6-55be77887563","Type":"ContainerDied","Data":"1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85"} Jan 27 18:32:10 crc kubenswrapper[4907]: I0127 18:32:10.367940 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68580f4b-e3d6-44f3-bff6-55be77887563","Type":"ContainerDied","Data":"d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213"} Jan 27 18:32:10 crc kubenswrapper[4907]: I0127 18:32:10.367952 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68580f4b-e3d6-44f3-bff6-55be77887563","Type":"ContainerDied","Data":"e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb"} Jan 27 18:32:10 crc kubenswrapper[4907]: I0127 18:32:10.376855 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"edbdf1e9-d0d7-458d-8f5a-891ee37d7483","Type":"ContainerStarted","Data":"87483b1d68dc87cf54e16981be8688b158dadb7daaef41a7fa9ac3faca7758a1"} Jan 27 18:32:10 crc kubenswrapper[4907]: I0127 18:32:10.376896 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"edbdf1e9-d0d7-458d-8f5a-891ee37d7483","Type":"ContainerStarted","Data":"5cc4b3eb790d0aac57b8db1df10e3659f7748f967443434090bec45b5fad5faf"} Jan 
27 18:32:10 crc kubenswrapper[4907]: I0127 18:32:10.377056 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 18:32:10 crc kubenswrapper[4907]: I0127 18:32:10.406920 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.015570129 podStartE2EDuration="2.406899419s" podCreationTimestamp="2026-01-27 18:32:08 +0000 UTC" firstStartedPulling="2026-01-27 18:32:09.393598521 +0000 UTC m=+1584.522881123" lastFinishedPulling="2026-01-27 18:32:09.784927801 +0000 UTC m=+1584.914210413" observedRunningTime="2026-01-27 18:32:10.392935174 +0000 UTC m=+1585.522217806" watchObservedRunningTime="2026-01-27 18:32:10.406899419 +0000 UTC m=+1585.536182031" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.326658 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.391774 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"611de5af-e33a-4aca-88c7-201f7c0e6cf9","Type":"ContainerStarted","Data":"7815e0a6478469eafb9c980413aea2a77a2cb76f6a019b1bac21d93d286bc8ba"} Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.405646 4907 generic.go:334] "Generic (PLEG): container finished" podID="68580f4b-e3d6-44f3-bff6-55be77887563" containerID="e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2" exitCode=0 Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.405692 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68580f4b-e3d6-44f3-bff6-55be77887563","Type":"ContainerDied","Data":"e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2"} Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.405744 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"68580f4b-e3d6-44f3-bff6-55be77887563","Type":"ContainerDied","Data":"f3337d626f9eda7e83ce1fcded55327b44307f257a95be718e85ae9bc6d36459"} Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.405770 4907 scope.go:117] "RemoveContainer" containerID="1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.405789 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.426435 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.618464967 podStartE2EDuration="3.426413141s" podCreationTimestamp="2026-01-27 18:32:08 +0000 UTC" firstStartedPulling="2026-01-27 18:32:09.395440733 +0000 UTC m=+1584.524723345" lastFinishedPulling="2026-01-27 18:32:10.203388907 +0000 UTC m=+1585.332671519" observedRunningTime="2026-01-27 18:32:11.419541687 +0000 UTC m=+1586.548824299" watchObservedRunningTime="2026-01-27 18:32:11.426413141 +0000 UTC m=+1586.555695753" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.446912 4907 scope.go:117] "RemoveContainer" containerID="d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.457336 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-run-httpd\") pod \"68580f4b-e3d6-44f3-bff6-55be77887563\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.457407 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-combined-ca-bundle\") pod \"68580f4b-e3d6-44f3-bff6-55be77887563\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " 
Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.457490 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-log-httpd\") pod \"68580f4b-e3d6-44f3-bff6-55be77887563\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.457532 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-scripts\") pod \"68580f4b-e3d6-44f3-bff6-55be77887563\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.457654 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-sg-core-conf-yaml\") pod \"68580f4b-e3d6-44f3-bff6-55be77887563\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.458021 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkkc5\" (UniqueName: \"kubernetes.io/projected/68580f4b-e3d6-44f3-bff6-55be77887563-kube-api-access-mkkc5\") pod \"68580f4b-e3d6-44f3-bff6-55be77887563\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.458137 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-config-data\") pod \"68580f4b-e3d6-44f3-bff6-55be77887563\" (UID: \"68580f4b-e3d6-44f3-bff6-55be77887563\") " Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.458149 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-run-httpd" (OuterVolumeSpecName: "run-httpd") 
pod "68580f4b-e3d6-44f3-bff6-55be77887563" (UID: "68580f4b-e3d6-44f3-bff6-55be77887563"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.458356 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "68580f4b-e3d6-44f3-bff6-55be77887563" (UID: "68580f4b-e3d6-44f3-bff6-55be77887563"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.459002 4907 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.459222 4907 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68580f4b-e3d6-44f3-bff6-55be77887563-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.466764 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68580f4b-e3d6-44f3-bff6-55be77887563-kube-api-access-mkkc5" (OuterVolumeSpecName: "kube-api-access-mkkc5") pod "68580f4b-e3d6-44f3-bff6-55be77887563" (UID: "68580f4b-e3d6-44f3-bff6-55be77887563"). InnerVolumeSpecName "kube-api-access-mkkc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.468765 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-scripts" (OuterVolumeSpecName: "scripts") pod "68580f4b-e3d6-44f3-bff6-55be77887563" (UID: "68580f4b-e3d6-44f3-bff6-55be77887563"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.484939 4907 scope.go:117] "RemoveContainer" containerID="e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.502632 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "68580f4b-e3d6-44f3-bff6-55be77887563" (UID: "68580f4b-e3d6-44f3-bff6-55be77887563"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.565874 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkkc5\" (UniqueName: \"kubernetes.io/projected/68580f4b-e3d6-44f3-bff6-55be77887563-kube-api-access-mkkc5\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.566110 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.566219 4907 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.634491 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-config-data" (OuterVolumeSpecName: "config-data") pod "68580f4b-e3d6-44f3-bff6-55be77887563" (UID: "68580f4b-e3d6-44f3-bff6-55be77887563"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.644962 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68580f4b-e3d6-44f3-bff6-55be77887563" (UID: "68580f4b-e3d6-44f3-bff6-55be77887563"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.667951 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.667988 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68580f4b-e3d6-44f3-bff6-55be77887563-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.721488 4907 scope.go:117] "RemoveContainer" containerID="e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.751905 4907 scope.go:117] "RemoveContainer" containerID="1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85" Jan 27 18:32:11 crc kubenswrapper[4907]: E0127 18:32:11.763295 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85\": container with ID starting with 1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85 not found: ID does not exist" containerID="1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.763364 4907 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85"} err="failed to get container status \"1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85\": rpc error: code = NotFound desc = could not find container \"1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85\": container with ID starting with 1e7193575d329bf730b1059c35358b0461904e3f14940d6afc68ee2ba15b4f85 not found: ID does not exist" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.763402 4907 scope.go:117] "RemoveContainer" containerID="d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213" Jan 27 18:32:11 crc kubenswrapper[4907]: E0127 18:32:11.764044 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213\": container with ID starting with d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213 not found: ID does not exist" containerID="d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.764076 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213"} err="failed to get container status \"d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213\": rpc error: code = NotFound desc = could not find container \"d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213\": container with ID starting with d026764c8e8c77623aa3a8f5a8c73e1a71c6a5a01e588a35be3760e066338213 not found: ID does not exist" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.764097 4907 scope.go:117] "RemoveContainer" containerID="e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2" Jan 27 18:32:11 crc kubenswrapper[4907]: E0127 18:32:11.764640 4907 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2\": container with ID starting with e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2 not found: ID does not exist" containerID="e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.764682 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2"} err="failed to get container status \"e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2\": rpc error: code = NotFound desc = could not find container \"e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2\": container with ID starting with e731b89405176fb8318e1c860fc00160d109bae1fda6da7d1cd3f368937bc5e2 not found: ID does not exist" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.764703 4907 scope.go:117] "RemoveContainer" containerID="e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb" Jan 27 18:32:11 crc kubenswrapper[4907]: E0127 18:32:11.765030 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb\": container with ID starting with e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb not found: ID does not exist" containerID="e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.765094 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb"} err="failed to get container status \"e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb\": rpc error: code = NotFound desc = could not find container 
\"e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb\": container with ID starting with e03628a2fbb9cb2499efe960aa73c71f3eead27ff13f3e0201071622eb34d8cb not found: ID does not exist" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.770618 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.790778 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.809439 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:32:11 crc kubenswrapper[4907]: E0127 18:32:11.810092 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="ceilometer-central-agent" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.810117 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="ceilometer-central-agent" Jan 27 18:32:11 crc kubenswrapper[4907]: E0127 18:32:11.810151 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="ceilometer-notification-agent" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.810161 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="ceilometer-notification-agent" Jan 27 18:32:11 crc kubenswrapper[4907]: E0127 18:32:11.810182 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="proxy-httpd" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.810189 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="proxy-httpd" Jan 27 18:32:11 crc kubenswrapper[4907]: E0127 18:32:11.810200 4907 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="sg-core" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.810208 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="sg-core" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.810465 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="ceilometer-central-agent" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.810494 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="proxy-httpd" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.810514 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="sg-core" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.810537 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" containerName="ceilometer-notification-agent" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.813118 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.818108 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.818872 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.819125 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.826133 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.877652 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.877735 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-scripts\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.877768 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.878072 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-config-data\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.878115 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-run-httpd\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.878159 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-log-httpd\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.878266 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.878392 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6jlm\" (UniqueName: \"kubernetes.io/projected/51f8c374-8d1f-4229-a1de-d25e2bceffb8-kube-api-access-m6jlm\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.979815 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6jlm\" (UniqueName: \"kubernetes.io/projected/51f8c374-8d1f-4229-a1de-d25e2bceffb8-kube-api-access-m6jlm\") pod \"ceilometer-0\" 
(UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.979936 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.979979 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-scripts\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.980006 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.980066 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-config-data\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.980087 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-run-httpd\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.980123 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-log-httpd\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.980172 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.982124 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-run-httpd\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.982292 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-log-httpd\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.983711 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.983966 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 
18:32:11.984013 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-scripts\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.984281 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-config-data\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.991399 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:11 crc kubenswrapper[4907]: I0127 18:32:11.998948 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6jlm\" (UniqueName: \"kubernetes.io/projected/51f8c374-8d1f-4229-a1de-d25e2bceffb8-kube-api-access-m6jlm\") pod \"ceilometer-0\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " pod="openstack/ceilometer-0" Jan 27 18:32:12 crc kubenswrapper[4907]: I0127 18:32:12.139409 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:32:12 crc kubenswrapper[4907]: I0127 18:32:12.613758 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:32:13 crc kubenswrapper[4907]: I0127 18:32:13.443929 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51f8c374-8d1f-4229-a1de-d25e2bceffb8","Type":"ContainerStarted","Data":"4257897db24798af95d4509907c4c709e003d71b1674cb5e114aba590d0cac1f"} Jan 27 18:32:13 crc kubenswrapper[4907]: I0127 18:32:13.774291 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68580f4b-e3d6-44f3-bff6-55be77887563" path="/var/lib/kubelet/pods/68580f4b-e3d6-44f3-bff6-55be77887563/volumes" Jan 27 18:32:14 crc kubenswrapper[4907]: I0127 18:32:14.456743 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51f8c374-8d1f-4229-a1de-d25e2bceffb8","Type":"ContainerStarted","Data":"0fa09af5df626bd9fe447ce93e6b787c16234ec1ae7063e37cd602cdb87ddd4d"} Jan 27 18:32:15 crc kubenswrapper[4907]: I0127 18:32:15.512527 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51f8c374-8d1f-4229-a1de-d25e2bceffb8","Type":"ContainerStarted","Data":"b3f1730873e2785f2a796a6fc33ba56845f523818c82bbd4491683257929ca99"} Jan 27 18:32:15 crc kubenswrapper[4907]: I0127 18:32:15.512932 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51f8c374-8d1f-4229-a1de-d25e2bceffb8","Type":"ContainerStarted","Data":"57c2482c14bee8e364ab3d823bb4cd44ae0406b6a218b4e7468fb51a0dc8f267"} Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.159670 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-rl9vb"] Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.174677 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-rl9vb"] Jan 27 18:32:17 crc kubenswrapper[4907]: 
I0127 18:32:17.244414 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-6xh4v"] Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.246254 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-6xh4v" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.280937 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-6xh4v"] Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.335240 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-combined-ca-bundle\") pod \"heat-db-sync-6xh4v\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " pod="openstack/heat-db-sync-6xh4v" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.335353 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-config-data\") pod \"heat-db-sync-6xh4v\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " pod="openstack/heat-db-sync-6xh4v" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.335426 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsgzr\" (UniqueName: \"kubernetes.io/projected/a67fd41b-79b0-4ab4-86b6-816389597620-kube-api-access-hsgzr\") pod \"heat-db-sync-6xh4v\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " pod="openstack/heat-db-sync-6xh4v" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.437584 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-combined-ca-bundle\") pod \"heat-db-sync-6xh4v\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " pod="openstack/heat-db-sync-6xh4v" Jan 27 
18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.437669 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-config-data\") pod \"heat-db-sync-6xh4v\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " pod="openstack/heat-db-sync-6xh4v" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.437735 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsgzr\" (UniqueName: \"kubernetes.io/projected/a67fd41b-79b0-4ab4-86b6-816389597620-kube-api-access-hsgzr\") pod \"heat-db-sync-6xh4v\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " pod="openstack/heat-db-sync-6xh4v" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.447649 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-combined-ca-bundle\") pod \"heat-db-sync-6xh4v\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " pod="openstack/heat-db-sync-6xh4v" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.452806 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-config-data\") pod \"heat-db-sync-6xh4v\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " pod="openstack/heat-db-sync-6xh4v" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.455838 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsgzr\" (UniqueName: \"kubernetes.io/projected/a67fd41b-79b0-4ab4-86b6-816389597620-kube-api-access-hsgzr\") pod \"heat-db-sync-6xh4v\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " pod="openstack/heat-db-sync-6xh4v" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.560302 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"51f8c374-8d1f-4229-a1de-d25e2bceffb8","Type":"ContainerStarted","Data":"7611af9be5c0b7c8fed77f8e1aa222a88cd393352d94555831c5e69faa414bfc"} Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.560485 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.571096 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-6xh4v" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.588463 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.950741704 podStartE2EDuration="6.588439081s" podCreationTimestamp="2026-01-27 18:32:11 +0000 UTC" firstStartedPulling="2026-01-27 18:32:12.618378269 +0000 UTC m=+1587.747660881" lastFinishedPulling="2026-01-27 18:32:16.256075646 +0000 UTC m=+1591.385358258" observedRunningTime="2026-01-27 18:32:17.581595388 +0000 UTC m=+1592.710878010" watchObservedRunningTime="2026-01-27 18:32:17.588439081 +0000 UTC m=+1592.717721693" Jan 27 18:32:17 crc kubenswrapper[4907]: I0127 18:32:17.787398 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90ffb508-65d2-4c20-95db-209a1c9a3399" path="/var/lib/kubelet/pods/90ffb508-65d2-4c20-95db-209a1c9a3399/volumes" Jan 27 18:32:18 crc kubenswrapper[4907]: W0127 18:32:18.168227 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda67fd41b_79b0_4ab4_86b6_816389597620.slice/crio-05271780c2c58fb11a1e6317931f7b2ef2d5aa985e73775fc4e3c4ba9a95671b WatchSource:0}: Error finding container 05271780c2c58fb11a1e6317931f7b2ef2d5aa985e73775fc4e3c4ba9a95671b: Status 404 returned error can't find the container with id 05271780c2c58fb11a1e6317931f7b2ef2d5aa985e73775fc4e3c4ba9a95671b Jan 27 18:32:18 crc kubenswrapper[4907]: I0127 18:32:18.170997 4907 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/heat-db-sync-6xh4v"] Jan 27 18:32:18 crc kubenswrapper[4907]: I0127 18:32:18.577498 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6xh4v" event={"ID":"a67fd41b-79b0-4ab4-86b6-816389597620","Type":"ContainerStarted","Data":"05271780c2c58fb11a1e6317931f7b2ef2d5aa985e73775fc4e3c4ba9a95671b"} Jan 27 18:32:18 crc kubenswrapper[4907]: I0127 18:32:18.897516 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 18:32:19 crc kubenswrapper[4907]: I0127 18:32:19.582327 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 18:32:19 crc kubenswrapper[4907]: I0127 18:32:19.754489 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:32:19 crc kubenswrapper[4907]: E0127 18:32:19.754901 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:32:19 crc kubenswrapper[4907]: I0127 18:32:19.837621 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:32:19 crc kubenswrapper[4907]: I0127 18:32:19.837864 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="ceilometer-central-agent" containerID="cri-o://0fa09af5df626bd9fe447ce93e6b787c16234ec1ae7063e37cd602cdb87ddd4d" gracePeriod=30 Jan 27 18:32:19 crc kubenswrapper[4907]: I0127 18:32:19.838394 4907 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="proxy-httpd" containerID="cri-o://7611af9be5c0b7c8fed77f8e1aa222a88cd393352d94555831c5e69faa414bfc" gracePeriod=30 Jan 27 18:32:19 crc kubenswrapper[4907]: I0127 18:32:19.838452 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="sg-core" containerID="cri-o://b3f1730873e2785f2a796a6fc33ba56845f523818c82bbd4491683257929ca99" gracePeriod=30 Jan 27 18:32:19 crc kubenswrapper[4907]: I0127 18:32:19.838487 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="ceilometer-notification-agent" containerID="cri-o://57c2482c14bee8e364ab3d823bb4cd44ae0406b6a218b4e7468fb51a0dc8f267" gracePeriod=30 Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.652859 4907 generic.go:334] "Generic (PLEG): container finished" podID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerID="7611af9be5c0b7c8fed77f8e1aa222a88cd393352d94555831c5e69faa414bfc" exitCode=0 Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.653162 4907 generic.go:334] "Generic (PLEG): container finished" podID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerID="b3f1730873e2785f2a796a6fc33ba56845f523818c82bbd4491683257929ca99" exitCode=2 Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.653175 4907 generic.go:334] "Generic (PLEG): container finished" podID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerID="57c2482c14bee8e364ab3d823bb4cd44ae0406b6a218b4e7468fb51a0dc8f267" exitCode=0 Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.653183 4907 generic.go:334] "Generic (PLEG): container finished" podID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerID="0fa09af5df626bd9fe447ce93e6b787c16234ec1ae7063e37cd602cdb87ddd4d" exitCode=0 Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.654396 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51f8c374-8d1f-4229-a1de-d25e2bceffb8","Type":"ContainerDied","Data":"7611af9be5c0b7c8fed77f8e1aa222a88cd393352d94555831c5e69faa414bfc"} Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.654464 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51f8c374-8d1f-4229-a1de-d25e2bceffb8","Type":"ContainerDied","Data":"b3f1730873e2785f2a796a6fc33ba56845f523818c82bbd4491683257929ca99"} Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.654476 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51f8c374-8d1f-4229-a1de-d25e2bceffb8","Type":"ContainerDied","Data":"57c2482c14bee8e364ab3d823bb4cd44ae0406b6a218b4e7468fb51a0dc8f267"} Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.654496 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51f8c374-8d1f-4229-a1de-d25e2bceffb8","Type":"ContainerDied","Data":"0fa09af5df626bd9fe447ce93e6b787c16234ec1ae7063e37cd602cdb87ddd4d"} Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.834139 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.851382 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.951545 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-combined-ca-bundle\") pod \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.951690 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-run-httpd\") pod \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.951721 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-scripts\") pod \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.951758 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-ceilometer-tls-certs\") pod \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.951808 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-config-data\") pod \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " Jan 27 18:32:20 crc 
kubenswrapper[4907]: I0127 18:32:20.951855 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6jlm\" (UniqueName: \"kubernetes.io/projected/51f8c374-8d1f-4229-a1de-d25e2bceffb8-kube-api-access-m6jlm\") pod \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.951972 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-log-httpd\") pod \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.952050 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-sg-core-conf-yaml\") pod \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\" (UID: \"51f8c374-8d1f-4229-a1de-d25e2bceffb8\") " Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.954376 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "51f8c374-8d1f-4229-a1de-d25e2bceffb8" (UID: "51f8c374-8d1f-4229-a1de-d25e2bceffb8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.954617 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "51f8c374-8d1f-4229-a1de-d25e2bceffb8" (UID: "51f8c374-8d1f-4229-a1de-d25e2bceffb8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.976502 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-scripts" (OuterVolumeSpecName: "scripts") pod "51f8c374-8d1f-4229-a1de-d25e2bceffb8" (UID: "51f8c374-8d1f-4229-a1de-d25e2bceffb8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:20 crc kubenswrapper[4907]: I0127 18:32:20.982182 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51f8c374-8d1f-4229-a1de-d25e2bceffb8-kube-api-access-m6jlm" (OuterVolumeSpecName: "kube-api-access-m6jlm") pod "51f8c374-8d1f-4229-a1de-d25e2bceffb8" (UID: "51f8c374-8d1f-4229-a1de-d25e2bceffb8"). InnerVolumeSpecName "kube-api-access-m6jlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.014396 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "51f8c374-8d1f-4229-a1de-d25e2bceffb8" (UID: "51f8c374-8d1f-4229-a1de-d25e2bceffb8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.055005 4907 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.055034 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.055045 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6jlm\" (UniqueName: \"kubernetes.io/projected/51f8c374-8d1f-4229-a1de-d25e2bceffb8-kube-api-access-m6jlm\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.055054 4907 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51f8c374-8d1f-4229-a1de-d25e2bceffb8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.055062 4907 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.085327 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "51f8c374-8d1f-4229-a1de-d25e2bceffb8" (UID: "51f8c374-8d1f-4229-a1de-d25e2bceffb8"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.158483 4907 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.164397 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51f8c374-8d1f-4229-a1de-d25e2bceffb8" (UID: "51f8c374-8d1f-4229-a1de-d25e2bceffb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.193081 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-config-data" (OuterVolumeSpecName: "config-data") pod "51f8c374-8d1f-4229-a1de-d25e2bceffb8" (UID: "51f8c374-8d1f-4229-a1de-d25e2bceffb8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.261315 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.261360 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f8c374-8d1f-4229-a1de-d25e2bceffb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.684329 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51f8c374-8d1f-4229-a1de-d25e2bceffb8","Type":"ContainerDied","Data":"4257897db24798af95d4509907c4c709e003d71b1674cb5e114aba590d0cac1f"} Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.684395 4907 scope.go:117] "RemoveContainer" containerID="7611af9be5c0b7c8fed77f8e1aa222a88cd393352d94555831c5e69faa414bfc" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.684398 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.786775 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.792112 4907 scope.go:117] "RemoveContainer" containerID="b3f1730873e2785f2a796a6fc33ba56845f523818c82bbd4491683257929ca99" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.800705 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.833285 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:32:21 crc kubenswrapper[4907]: E0127 18:32:21.833911 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="ceilometer-central-agent" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.833934 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="ceilometer-central-agent" Jan 27 18:32:21 crc kubenswrapper[4907]: E0127 18:32:21.833950 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="sg-core" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.833961 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="sg-core" Jan 27 18:32:21 crc kubenswrapper[4907]: E0127 18:32:21.833983 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="proxy-httpd" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.833991 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="proxy-httpd" Jan 27 18:32:21 crc kubenswrapper[4907]: E0127 18:32:21.834013 4907 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="ceilometer-notification-agent" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.834020 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="ceilometer-notification-agent" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.834223 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="proxy-httpd" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.834241 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="sg-core" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.834260 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="ceilometer-notification-agent" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.834279 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" containerName="ceilometer-central-agent" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.836437 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.844356 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.846935 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.847910 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.848208 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.859250 4907 scope.go:117] "RemoveContainer" containerID="57c2482c14bee8e364ab3d823bb4cd44ae0406b6a218b4e7468fb51a0dc8f267" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.879071 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v86x\" (UniqueName: \"kubernetes.io/projected/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-kube-api-access-8v86x\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.879147 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.879225 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-run-httpd\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " 
pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.879253 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-config-data\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.879315 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-scripts\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.879412 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.879463 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-log-httpd\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.879580 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.893507 4907 scope.go:117] "RemoveContainer" 
containerID="0fa09af5df626bd9fe447ce93e6b787c16234ec1ae7063e37cd602cdb87ddd4d" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.981661 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-config-data\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.981720 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-scripts\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.981833 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.981875 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-log-httpd\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.981980 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.982130 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8v86x\" (UniqueName: \"kubernetes.io/projected/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-kube-api-access-8v86x\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.982174 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.982246 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-run-httpd\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.982538 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-log-httpd\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.982713 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-run-httpd\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.990793 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc 
kubenswrapper[4907]: I0127 18:32:21.994363 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.995164 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.999319 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-config-data\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:21 crc kubenswrapper[4907]: I0127 18:32:21.999376 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-scripts\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:22 crc kubenswrapper[4907]: I0127 18:32:22.001260 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v86x\" (UniqueName: \"kubernetes.io/projected/8cc0b779-ca13-49be-91c1-ea2eb4a99d9c-kube-api-access-8v86x\") pod \"ceilometer-0\" (UID: \"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c\") " pod="openstack/ceilometer-0" Jan 27 18:32:22 crc kubenswrapper[4907]: I0127 18:32:22.187192 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:32:22 crc kubenswrapper[4907]: I0127 18:32:22.778515 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:32:22 crc kubenswrapper[4907]: W0127 18:32:22.787324 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cc0b779_ca13_49be_91c1_ea2eb4a99d9c.slice/crio-bd2e92a9ba6f9f61093f2a68f1985e94105299b0eb4c0fac63a95d4c32cfdcbe WatchSource:0}: Error finding container bd2e92a9ba6f9f61093f2a68f1985e94105299b0eb4c0fac63a95d4c32cfdcbe: Status 404 returned error can't find the container with id bd2e92a9ba6f9f61093f2a68f1985e94105299b0eb4c0fac63a95d4c32cfdcbe Jan 27 18:32:23 crc kubenswrapper[4907]: I0127 18:32:23.790723 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51f8c374-8d1f-4229-a1de-d25e2bceffb8" path="/var/lib/kubelet/pods/51f8c374-8d1f-4229-a1de-d25e2bceffb8/volumes" Jan 27 18:32:23 crc kubenswrapper[4907]: I0127 18:32:23.792105 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c","Type":"ContainerStarted","Data":"bd2e92a9ba6f9f61093f2a68f1985e94105299b0eb4c0fac63a95d4c32cfdcbe"} Jan 27 18:32:25 crc kubenswrapper[4907]: I0127 18:32:25.070328 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" containerName="rabbitmq" containerID="cri-o://435ff92660fb60aba6fab546f0ce4c4bc90dec4040ba11caa3221944cf5e0406" gracePeriod=604795 Jan 27 18:32:25 crc kubenswrapper[4907]: I0127 18:32:25.934070 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="52cb02a9-7a60-4761-9770-a9b6910f1088" containerName="rabbitmq" containerID="cri-o://dbafa8ebc75d2673abdb01c053a4823df486a81a9c9d8589b2c27036b362c6f8" gracePeriod=604795 Jan 
27 18:32:29 crc kubenswrapper[4907]: I0127 18:32:29.640859 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Jan 27 18:32:30 crc kubenswrapper[4907]: I0127 18:32:30.019959 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="52cb02a9-7a60-4761-9770-a9b6910f1088" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Jan 27 18:32:31 crc kubenswrapper[4907]: I0127 18:32:31.878616 4907 generic.go:334] "Generic (PLEG): container finished" podID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" containerID="435ff92660fb60aba6fab546f0ce4c4bc90dec4040ba11caa3221944cf5e0406" exitCode=0 Jan 27 18:32:31 crc kubenswrapper[4907]: I0127 18:32:31.878712 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e","Type":"ContainerDied","Data":"435ff92660fb60aba6fab546f0ce4c4bc90dec4040ba11caa3221944cf5e0406"} Jan 27 18:32:33 crc kubenswrapper[4907]: I0127 18:32:33.904660 4907 generic.go:334] "Generic (PLEG): container finished" podID="52cb02a9-7a60-4761-9770-a9b6910f1088" containerID="dbafa8ebc75d2673abdb01c053a4823df486a81a9c9d8589b2c27036b362c6f8" exitCode=0 Jan 27 18:32:33 crc kubenswrapper[4907]: I0127 18:32:33.904739 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52cb02a9-7a60-4761-9770-a9b6910f1088","Type":"ContainerDied","Data":"dbafa8ebc75d2673abdb01c053a4823df486a81a9c9d8589b2c27036b362c6f8"} Jan 27 18:32:34 crc kubenswrapper[4907]: I0127 18:32:34.748239 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:32:34 crc kubenswrapper[4907]: E0127 18:32:34.748853 4907 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.556245 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-7hqh2"] Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.560758 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.564820 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.582742 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-7hqh2"] Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.665336 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.665680 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-config\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.665714 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.665786 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-svc\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.665809 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.665973 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bzqw\" (UniqueName: \"kubernetes.io/projected/f5364919-e030-4b8d-a22d-708b6c7bd0cb-kube-api-access-4bzqw\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.666011 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 
crc kubenswrapper[4907]: I0127 18:32:38.768420 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-svc\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.768475 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.768609 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bzqw\" (UniqueName: \"kubernetes.io/projected/f5364919-e030-4b8d-a22d-708b6c7bd0cb-kube-api-access-4bzqw\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.768639 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.768706 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 
18:32:38.768736 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-config\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.768762 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.769867 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.770657 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-svc\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.771272 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.772258 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.773026 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.773717 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-config\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.800616 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bzqw\" (UniqueName: \"kubernetes.io/projected/f5364919-e030-4b8d-a22d-708b6c7bd0cb-kube-api-access-4bzqw\") pod \"dnsmasq-dns-594cb89c79-7hqh2\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:38 crc kubenswrapper[4907]: I0127 18:32:38.920320 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.179927 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f75nb"] Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.183067 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.196082 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f75nb"] Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.268480 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.302770 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.303544 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vfkq\" (UniqueName: \"kubernetes.io/projected/189c0f02-da43-4eb5-9cf1-ff9154e1a952-kube-api-access-7vfkq\") pod \"certified-operators-f75nb\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.303715 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-catalog-content\") pod \"certified-operators-f75nb\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.303754 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-utilities\") pod \"certified-operators-f75nb\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.406641 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-config-data\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.406704 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-tls\") pod \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.406751 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-erlang-cookie\") pod \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.406782 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52cb02a9-7a60-4761-9770-a9b6910f1088-erlang-cookie-secret\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.407509 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408102 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622\") pod \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\" (UID: 
\"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408372 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-config-data\") pod \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408406 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v88pj\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-kube-api-access-v88pj\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408430 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-plugins-conf\") pod \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408533 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-server-conf\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408590 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-plugins-conf\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408616 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-erlang-cookie\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408649 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-plugins\") pod \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408693 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-plugins\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408715 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-confd\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408874 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-confd\") pod \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.408901 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-pod-info\") pod \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 
18:32:40.408967 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-tls\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.409033 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-erlang-cookie-secret\") pod \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.409064 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52cb02a9-7a60-4761-9770-a9b6910f1088-pod-info\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.409084 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52b6q\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-kube-api-access-52b6q\") pod \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.409129 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-server-conf\") pod \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\" (UID: \"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e\") " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.410572 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vfkq\" (UniqueName: \"kubernetes.io/projected/189c0f02-da43-4eb5-9cf1-ff9154e1a952-kube-api-access-7vfkq\") 
pod \"certified-operators-f75nb\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.411145 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-catalog-content\") pod \"certified-operators-f75nb\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.411228 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-utilities\") pod \"certified-operators-f75nb\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.412302 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-utilities\") pod \"certified-operators-f75nb\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.412431 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.412587 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.412633 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-catalog-content\") pod \"certified-operators-f75nb\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.415945 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" (UID: "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.417599 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" (UID: "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.422599 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" (UID: "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.423289 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" (UID: "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.424477 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.427104 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.427473 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-kube-api-access-52b6q" (OuterVolumeSpecName: "kube-api-access-52b6q") pod "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" (UID: "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e"). InnerVolumeSpecName "kube-api-access-52b6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.435507 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-pod-info" (OuterVolumeSpecName: "pod-info") pod "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" (UID: "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.437603 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" (UID: "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.439482 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52cb02a9-7a60-4761-9770-a9b6910f1088-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.443947 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vfkq\" (UniqueName: \"kubernetes.io/projected/189c0f02-da43-4eb5-9cf1-ff9154e1a952-kube-api-access-7vfkq\") pod \"certified-operators-f75nb\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.460727 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-kube-api-access-v88pj" (OuterVolumeSpecName: "kube-api-access-v88pj") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088"). InnerVolumeSpecName "kube-api-access-v88pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.475689 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/52cb02a9-7a60-4761-9770-a9b6910f1088-pod-info" (OuterVolumeSpecName: "pod-info") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: E0127 18:32:40.514566 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf podName:52cb02a9-7a60-4761-9770-a9b6910f1088 nodeName:}" failed. No retries permitted until 2026-01-27 18:32:41.014521051 +0000 UTC m=+1616.143803713 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "persistence" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515371 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515397 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515407 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515416 4907 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515425 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515433 4907 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515443 4907 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52cb02a9-7a60-4761-9770-a9b6910f1088-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515451 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52b6q\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-kube-api-access-52b6q\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515458 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515466 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515473 4907 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52cb02a9-7a60-4761-9770-a9b6910f1088-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515482 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v88pj\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-kube-api-access-v88pj\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515492 4907 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-plugins-conf\") on 
node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.515499 4907 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.535826 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-config-data" (OuterVolumeSpecName: "config-data") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.558308 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622" (OuterVolumeSpecName: "persistence") pod "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" (UID: "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e"). InnerVolumeSpecName "pvc-2544df99-ce65-431e-b41d-029cd6318622". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.607918 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-server-conf" (OuterVolumeSpecName: "server-conf") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.609589 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-config-data" (OuterVolumeSpecName: "config-data") pod "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" (UID: "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.623739 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.627498 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.627539 4907 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2544df99-ce65-431e-b41d-029cd6318622\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622\") on node \"crc\" " Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.627550 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.627582 4907 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52cb02a9-7a60-4761-9770-a9b6910f1088-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.655797 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-server-conf" (OuterVolumeSpecName: "server-conf") pod "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" (UID: "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.666163 4907 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping UnmountDevice... Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.666335 4907 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2544df99-ce65-431e-b41d-029cd6318622" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622") on node "crc" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.697447 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" (UID: "7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.730223 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.730258 4907 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.730270 4907 reconciler_common.go:293] "Volume detached for volume \"pvc-2544df99-ce65-431e-b41d-029cd6318622\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.744274 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.833218 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52cb02a9-7a60-4761-9770-a9b6910f1088-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.980248 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e","Type":"ContainerDied","Data":"3251ec13ecf2d816f4249d2d95826865dd55f4c6e4f346e728a8820870c8122f"} Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.980286 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.980294 4907 scope.go:117] "RemoveContainer" containerID="435ff92660fb60aba6fab546f0ce4c4bc90dec4040ba11caa3221944cf5e0406" Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.983347 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52cb02a9-7a60-4761-9770-a9b6910f1088","Type":"ContainerDied","Data":"ef5b60bb5a09fc8310da2150429f54ee5d10a08c6bea32b06c65111f27a03d40"} Jan 27 18:32:40 crc kubenswrapper[4907]: I0127 18:32:40.983418 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.018110 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.039736 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") pod \"52cb02a9-7a60-4761-9770-a9b6910f1088\" (UID: \"52cb02a9-7a60-4761-9770-a9b6910f1088\") " Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.044310 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.064504 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf" (OuterVolumeSpecName: "persistence") pod "52cb02a9-7a60-4761-9770-a9b6910f1088" (UID: "52cb02a9-7a60-4761-9770-a9b6910f1088"). InnerVolumeSpecName "pvc-c92dd174-2681-4ccd-ace7-bb768c862acf". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.069745 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 18:32:41 crc kubenswrapper[4907]: E0127 18:32:41.070395 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52cb02a9-7a60-4761-9770-a9b6910f1088" containerName="rabbitmq" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.070421 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="52cb02a9-7a60-4761-9770-a9b6910f1088" containerName="rabbitmq" Jan 27 18:32:41 crc kubenswrapper[4907]: E0127 18:32:41.070459 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" containerName="rabbitmq" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.070466 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" containerName="rabbitmq" Jan 27 18:32:41 crc kubenswrapper[4907]: E0127 18:32:41.070480 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52cb02a9-7a60-4761-9770-a9b6910f1088" containerName="setup-container" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.070489 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="52cb02a9-7a60-4761-9770-a9b6910f1088" containerName="setup-container" Jan 27 18:32:41 crc kubenswrapper[4907]: E0127 18:32:41.070527 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" containerName="setup-container" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.070538 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" containerName="setup-container" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.070793 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" containerName="rabbitmq" Jan 27 18:32:41 crc kubenswrapper[4907]: 
I0127 18:32:41.070806 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="52cb02a9-7a60-4761-9770-a9b6910f1088" containerName="rabbitmq" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.072270 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.085399 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.144984 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.145263 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.145590 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2544df99-ce65-431e-b41d-029cd6318622\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.145744 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-server-conf\") pod \"rabbitmq-server-2\" (UID: 
\"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.145920 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.146055 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.146171 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdvmw\" (UniqueName: \"kubernetes.io/projected/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-kube-api-access-sdvmw\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.146300 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-pod-info\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.146394 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: 
\"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.146513 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.146683 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-config-data\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.147390 4907 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") on node \"crc\" " Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.208485 4907 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.209962 4907 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c92dd174-2681-4ccd-ace7-bb768c862acf" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf") on node "crc" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.219741 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.249860 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2544df99-ce65-431e-b41d-029cd6318622\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.250179 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-server-conf\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.250359 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.250516 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc 
kubenswrapper[4907]: I0127 18:32:41.253588 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdvmw\" (UniqueName: \"kubernetes.io/projected/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-kube-api-access-sdvmw\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.253745 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.253844 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-pod-info\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.253989 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.254162 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-config-data\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.254363 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.254507 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.254873 4907 reconciler_common.go:293] "Volume detached for volume \"pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.253246 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-server-conf\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.255296 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-config-data\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.257588 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 
18:32:41.262201 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.254579 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.262808 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-pod-info\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.262845 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.263421 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.265345 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.265375 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2544df99-ce65-431e-b41d-029cd6318622\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2ec55be154a66d09157b0ca2623a596d4c9f6b8adde5f16f336c822c2282072f/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.274694 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.278580 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.280486 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdvmw\" (UniqueName: \"kubernetes.io/projected/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-kube-api-access-sdvmw\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.282910 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5f8e936e-82a6-49cc-bb09-d247a2d0e47b-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.285167 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.287313 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.287321 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.291485 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.291780 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.291798 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fl6zh" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.291930 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.292043 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.293287 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.358376 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/021272d4-b660-4c16-b9a6-befd84abe2cc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.358483 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-p7gdp\" (UniqueName: \"kubernetes.io/projected/021272d4-b660-4c16-b9a6-befd84abe2cc-kube-api-access-p7gdp\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.358504 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/021272d4-b660-4c16-b9a6-befd84abe2cc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.358533 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.358593 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.358622 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.358648 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/021272d4-b660-4c16-b9a6-befd84abe2cc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.358826 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/021272d4-b660-4c16-b9a6-befd84abe2cc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.358853 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/021272d4-b660-4c16-b9a6-befd84abe2cc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.358870 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.358914 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.360302 4907 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"pvc-2544df99-ce65-431e-b41d-029cd6318622\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2544df99-ce65-431e-b41d-029cd6318622\") pod \"rabbitmq-server-2\" (UID: \"5f8e936e-82a6-49cc-bb09-d247a2d0e47b\") " pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.461242 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/021272d4-b660-4c16-b9a6-befd84abe2cc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.461306 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/021272d4-b660-4c16-b9a6-befd84abe2cc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.461342 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.461397 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.461510 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/021272d4-b660-4c16-b9a6-befd84abe2cc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.461697 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7gdp\" (UniqueName: \"kubernetes.io/projected/021272d4-b660-4c16-b9a6-befd84abe2cc-kube-api-access-p7gdp\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.461729 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/021272d4-b660-4c16-b9a6-befd84abe2cc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.461772 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.461830 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.461861 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.461889 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/021272d4-b660-4c16-b9a6-befd84abe2cc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.462290 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/021272d4-b660-4c16-b9a6-befd84abe2cc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.463415 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.463442 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d92d749e8b6234664dd57319b2b5b7962d8bfa8dc2f0d92cbae41209d539d4c4/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.464999 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.465208 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.465710 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/021272d4-b660-4c16-b9a6-befd84abe2cc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.467009 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/021272d4-b660-4c16-b9a6-befd84abe2cc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.467257 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.469526 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/021272d4-b660-4c16-b9a6-befd84abe2cc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.470197 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/021272d4-b660-4c16-b9a6-befd84abe2cc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.470991 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/021272d4-b660-4c16-b9a6-befd84abe2cc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.478501 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.480724 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7gdp\" (UniqueName: \"kubernetes.io/projected/021272d4-b660-4c16-b9a6-befd84abe2cc-kube-api-access-p7gdp\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.519917 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c92dd174-2681-4ccd-ace7-bb768c862acf\") pod \"rabbitmq-cell1-server-0\" (UID: \"021272d4-b660-4c16-b9a6-befd84abe2cc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.716207 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.812049 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52cb02a9-7a60-4761-9770-a9b6910f1088" path="/var/lib/kubelet/pods/52cb02a9-7a60-4761-9770-a9b6910f1088/volumes" Jan 27 18:32:41 crc kubenswrapper[4907]: I0127 18:32:41.815028 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" path="/var/lib/kubelet/pods/7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e/volumes" Jan 27 18:32:44 crc kubenswrapper[4907]: I0127 18:32:44.641686 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="7b7e8d60-c21c-4fe4-b6f6-4d8806ddc67e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: i/o timeout" Jan 27 18:32:45 crc kubenswrapper[4907]: I0127 18:32:45.020689 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" 
podUID="52cb02a9-7a60-4761-9770-a9b6910f1088" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: i/o timeout" Jan 27 18:32:47 crc kubenswrapper[4907]: E0127 18:32:47.407198 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Jan 27 18:32:47 crc kubenswrapper[4907]: E0127 18:32:47.407758 4907 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Jan 27 18:32:47 crc kubenswrapper[4907]: E0127 18:32:47.407913 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsgzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-6xh4v_openstack(a67fd41b-79b0-4ab4-86b6-816389597620): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 
27 18:32:47 crc kubenswrapper[4907]: E0127 18:32:47.409351 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-6xh4v" podUID="a67fd41b-79b0-4ab4-86b6-816389597620" Jan 27 18:32:47 crc kubenswrapper[4907]: I0127 18:32:47.749023 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:32:47 crc kubenswrapper[4907]: E0127 18:32:47.750193 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:32:47 crc kubenswrapper[4907]: E0127 18:32:47.755640 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Jan 27 18:32:47 crc kubenswrapper[4907]: E0127 18:32:47.755694 4907 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Jan 27 18:32:47 crc kubenswrapper[4907]: E0127 18:32:47.755829 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66dh579h677hfbh58ch597h56h674h5ddh6dhd7h57bh66ch5ddh5bfh5dch5bfh88h565h696h9dhbh546h667h666h5b4h558h654h585h65fh5fbh67fq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8v86x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8cc0b779-ca13-49be-91c1-ea2eb4a99d9c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:32:47 crc kubenswrapper[4907]: I0127 18:32:47.787764 4907 scope.go:117] "RemoveContainer" containerID="6e1c166ec4ad12335939eace84afc80867bd30207c4badea742d3beea9a3565a" Jan 27 18:32:47 crc kubenswrapper[4907]: I0127 18:32:47.788227 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:32:47 crc kubenswrapper[4907]: I0127 18:32:47.962410 4907 scope.go:117] "RemoveContainer" containerID="dbafa8ebc75d2673abdb01c053a4823df486a81a9c9d8589b2c27036b362c6f8" Jan 27 18:32:48 crc kubenswrapper[4907]: I0127 18:32:48.066207 4907 scope.go:117] "RemoveContainer" containerID="008c3a3f99a2ccc59327a0f9a489a17aa72fc4b82aca7d17aabd1500b22d4c8e" Jan 27 18:32:48 crc kubenswrapper[4907]: E0127 18:32:48.127966 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-6xh4v" podUID="a67fd41b-79b0-4ab4-86b6-816389597620" Jan 27 18:32:48 crc kubenswrapper[4907]: I0127 18:32:48.630570 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:32:48 crc kubenswrapper[4907]: W0127 18:32:48.646943 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5364919_e030_4b8d_a22d_708b6c7bd0cb.slice/crio-b20e731040e42af751cf6bb4ab1aa4206ff247ad3654db6a8645c821c97c43b3 WatchSource:0}: Error finding container b20e731040e42af751cf6bb4ab1aa4206ff247ad3654db6a8645c821c97c43b3: Status 404 returned error can't find the container with id b20e731040e42af751cf6bb4ab1aa4206ff247ad3654db6a8645c821c97c43b3 Jan 27 18:32:48 crc kubenswrapper[4907]: I0127 18:32:48.646971 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-7hqh2"] Jan 27 18:32:48 crc kubenswrapper[4907]: W0127 18:32:48.652437 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f8e936e_82a6_49cc_bb09_d247a2d0e47b.slice/crio-fcc164e1e7f99125701c7e9c89a4ae237a503d80104f2f9a3227b96b6dac9fd6 WatchSource:0}: Error finding container fcc164e1e7f99125701c7e9c89a4ae237a503d80104f2f9a3227b96b6dac9fd6: Status 404 returned error can't find the container with id fcc164e1e7f99125701c7e9c89a4ae237a503d80104f2f9a3227b96b6dac9fd6 Jan 27 18:32:48 crc kubenswrapper[4907]: I0127 18:32:48.660102 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 18:32:48 crc kubenswrapper[4907]: W0127 18:32:48.667929 4907 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod189c0f02_da43_4eb5_9cf1_ff9154e1a952.slice/crio-7c26280363be166379c041f79fa49e1cba31a92bb3f77d3485de0950431f2de4 WatchSource:0}: Error finding container 7c26280363be166379c041f79fa49e1cba31a92bb3f77d3485de0950431f2de4: Status 404 returned error can't find the container with id 7c26280363be166379c041f79fa49e1cba31a92bb3f77d3485de0950431f2de4 Jan 27 18:32:48 crc kubenswrapper[4907]: I0127 18:32:48.673524 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f75nb"] Jan 27 18:32:49 crc kubenswrapper[4907]: I0127 18:32:49.105212 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"021272d4-b660-4c16-b9a6-befd84abe2cc","Type":"ContainerStarted","Data":"92684dc2eb115bb505f833aa3fb3a06a87b54f5594cffbaea4a66b68ceb77d0d"} Jan 27 18:32:49 crc kubenswrapper[4907]: I0127 18:32:49.106748 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"5f8e936e-82a6-49cc-bb09-d247a2d0e47b","Type":"ContainerStarted","Data":"fcc164e1e7f99125701c7e9c89a4ae237a503d80104f2f9a3227b96b6dac9fd6"} Jan 27 18:32:49 crc kubenswrapper[4907]: I0127 18:32:49.108070 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" event={"ID":"f5364919-e030-4b8d-a22d-708b6c7bd0cb","Type":"ContainerStarted","Data":"e0a7e3e26185beb42cdcfc251cf0f5dc0ceaef0d5dea2938745dc73ee83d830c"} Jan 27 18:32:49 crc kubenswrapper[4907]: I0127 18:32:49.108097 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" event={"ID":"f5364919-e030-4b8d-a22d-708b6c7bd0cb","Type":"ContainerStarted","Data":"b20e731040e42af751cf6bb4ab1aa4206ff247ad3654db6a8645c821c97c43b3"} Jan 27 18:32:49 crc kubenswrapper[4907]: I0127 18:32:49.110813 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f75nb" 
event={"ID":"189c0f02-da43-4eb5-9cf1-ff9154e1a952","Type":"ContainerStarted","Data":"245e56ea7e608b04825e7645b5a948e1629cf487574bdb62fd0d1a074cb20da1"} Jan 27 18:32:49 crc kubenswrapper[4907]: I0127 18:32:49.110877 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f75nb" event={"ID":"189c0f02-da43-4eb5-9cf1-ff9154e1a952","Type":"ContainerStarted","Data":"7c26280363be166379c041f79fa49e1cba31a92bb3f77d3485de0950431f2de4"} Jan 27 18:32:50 crc kubenswrapper[4907]: I0127 18:32:50.140641 4907 generic.go:334] "Generic (PLEG): container finished" podID="f5364919-e030-4b8d-a22d-708b6c7bd0cb" containerID="e0a7e3e26185beb42cdcfc251cf0f5dc0ceaef0d5dea2938745dc73ee83d830c" exitCode=0 Jan 27 18:32:50 crc kubenswrapper[4907]: I0127 18:32:50.140839 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" event={"ID":"f5364919-e030-4b8d-a22d-708b6c7bd0cb","Type":"ContainerDied","Data":"e0a7e3e26185beb42cdcfc251cf0f5dc0ceaef0d5dea2938745dc73ee83d830c"} Jan 27 18:32:50 crc kubenswrapper[4907]: I0127 18:32:50.151706 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c","Type":"ContainerStarted","Data":"0c2ac46d1e93a41bc414cb4514124fb328356ae9f1b768ba585227d541f6220e"} Jan 27 18:32:50 crc kubenswrapper[4907]: I0127 18:32:50.154836 4907 generic.go:334] "Generic (PLEG): container finished" podID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" containerID="245e56ea7e608b04825e7645b5a948e1629cf487574bdb62fd0d1a074cb20da1" exitCode=0 Jan 27 18:32:50 crc kubenswrapper[4907]: I0127 18:32:50.154869 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f75nb" event={"ID":"189c0f02-da43-4eb5-9cf1-ff9154e1a952","Type":"ContainerDied","Data":"245e56ea7e608b04825e7645b5a948e1629cf487574bdb62fd0d1a074cb20da1"} Jan 27 18:32:51 crc kubenswrapper[4907]: I0127 18:32:51.172087 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"5f8e936e-82a6-49cc-bb09-d247a2d0e47b","Type":"ContainerStarted","Data":"50f55f0c0b4a989d807726928ab2d56581879267e991795d560d13a89d68b702"} Jan 27 18:32:51 crc kubenswrapper[4907]: I0127 18:32:51.179068 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" event={"ID":"f5364919-e030-4b8d-a22d-708b6c7bd0cb","Type":"ContainerStarted","Data":"b250a8025f6304fc38d65bd406dc7fe5603770c18aba24c85b4ed2fa0ca48c1a"} Jan 27 18:32:51 crc kubenswrapper[4907]: I0127 18:32:51.180090 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:51 crc kubenswrapper[4907]: I0127 18:32:51.184525 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"021272d4-b660-4c16-b9a6-befd84abe2cc","Type":"ContainerStarted","Data":"befd9365abfe2e65e5f9cdedac175feb33a24273b6a8cede89305220df15b5d5"} Jan 27 18:32:51 crc kubenswrapper[4907]: I0127 18:32:51.191953 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c","Type":"ContainerStarted","Data":"8e1f96bb050bc5d4264454553d68034e151e1a7adb65b2bc4cfa829c4501075e"} Jan 27 18:32:51 crc kubenswrapper[4907]: I0127 18:32:51.263978 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" podStartSLOduration=13.263960604 podStartE2EDuration="13.263960604s" podCreationTimestamp="2026-01-27 18:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:32:51.254377552 +0000 UTC m=+1626.383660164" watchObservedRunningTime="2026-01-27 18:32:51.263960604 +0000 UTC m=+1626.393243216" Jan 27 18:32:52 crc kubenswrapper[4907]: I0127 18:32:52.205058 4907 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f75nb" event={"ID":"189c0f02-da43-4eb5-9cf1-ff9154e1a952","Type":"ContainerStarted","Data":"12193fac6bdbc284c0df0311fcec63d63f4af964fa52f71afe109f0a8def08e1"} Jan 27 18:32:53 crc kubenswrapper[4907]: E0127 18:32:53.446860 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="8cc0b779-ca13-49be-91c1-ea2eb4a99d9c" Jan 27 18:32:54 crc kubenswrapper[4907]: I0127 18:32:54.236174 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c","Type":"ContainerStarted","Data":"570eb6afe3191636018d46ec4cf448c4203df83da283c7465c362786dd4332f2"} Jan 27 18:32:54 crc kubenswrapper[4907]: I0127 18:32:54.236547 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 18:32:54 crc kubenswrapper[4907]: E0127 18:32:54.238695 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8cc0b779-ca13-49be-91c1-ea2eb4a99d9c" Jan 27 18:32:54 crc kubenswrapper[4907]: I0127 18:32:54.242037 4907 generic.go:334] "Generic (PLEG): container finished" podID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" containerID="12193fac6bdbc284c0df0311fcec63d63f4af964fa52f71afe109f0a8def08e1" exitCode=0 Jan 27 18:32:54 crc kubenswrapper[4907]: I0127 18:32:54.242085 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f75nb" 
event={"ID":"189c0f02-da43-4eb5-9cf1-ff9154e1a952","Type":"ContainerDied","Data":"12193fac6bdbc284c0df0311fcec63d63f4af964fa52f71afe109f0a8def08e1"} Jan 27 18:32:55 crc kubenswrapper[4907]: E0127 18:32:55.254207 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8cc0b779-ca13-49be-91c1-ea2eb4a99d9c" Jan 27 18:32:56 crc kubenswrapper[4907]: I0127 18:32:56.264778 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f75nb" event={"ID":"189c0f02-da43-4eb5-9cf1-ff9154e1a952","Type":"ContainerStarted","Data":"9be11158476d00ea37f53088ab02a3d4f06f2501f52b453473515af8af2a3f16"} Jan 27 18:32:56 crc kubenswrapper[4907]: I0127 18:32:56.292808 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f75nb" podStartSLOduration=11.19626883 podStartE2EDuration="16.292791177s" podCreationTimestamp="2026-01-27 18:32:40 +0000 UTC" firstStartedPulling="2026-01-27 18:32:50.158464937 +0000 UTC m=+1625.287747549" lastFinishedPulling="2026-01-27 18:32:55.254987284 +0000 UTC m=+1630.384269896" observedRunningTime="2026-01-27 18:32:56.281000542 +0000 UTC m=+1631.410283164" watchObservedRunningTime="2026-01-27 18:32:56.292791177 +0000 UTC m=+1631.422073789" Jan 27 18:32:58 crc kubenswrapper[4907]: I0127 18:32:58.921716 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.026414 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-rqfpj"] Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.027695 4907 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" podUID="2a5f060b-75dd-4083-badf-a9d208f59b65" containerName="dnsmasq-dns" containerID="cri-o://5da906835235118c6e1fd88133f8ad0821d70a4ef6cd33bf22120c41b608b900" gracePeriod=10 Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.577432 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-hhml4"] Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.579781 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.597885 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-config\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.598054 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr4vg\" (UniqueName: \"kubernetes.io/projected/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-kube-api-access-sr4vg\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.598093 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.598114 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.598138 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.598169 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.598185 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.611296 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-hhml4"] Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.699746 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr4vg\" (UniqueName: \"kubernetes.io/projected/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-kube-api-access-sr4vg\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " 
pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.699826 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.699870 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.700731 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.700773 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.700819 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " 
pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.700839 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.700890 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.701019 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.701395 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.701574 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-config\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: 
I0127 18:32:59.702155 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-config\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.702617 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.748597 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr4vg\" (UniqueName: \"kubernetes.io/projected/5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e-kube-api-access-sr4vg\") pod \"dnsmasq-dns-5596c69fcc-hhml4\" (UID: \"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e\") " pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:32:59 crc kubenswrapper[4907]: I0127 18:32:59.910243 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.314946 4907 generic.go:334] "Generic (PLEG): container finished" podID="2a5f060b-75dd-4083-badf-a9d208f59b65" containerID="5da906835235118c6e1fd88133f8ad0821d70a4ef6cd33bf22120c41b608b900" exitCode=0 Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.315059 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" event={"ID":"2a5f060b-75dd-4083-badf-a9d208f59b65","Type":"ContainerDied","Data":"5da906835235118c6e1fd88133f8ad0821d70a4ef6cd33bf22120c41b608b900"} Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.494893 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-hhml4"] Jan 27 18:33:00 crc kubenswrapper[4907]: W0127 18:33:00.502951 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cf7b3c3_995f_48f8_a74f_3ffaf08f6d1e.slice/crio-d9a3490b17793bb4e7ab616cb3d40072de83fe5708c796fda986d13336282bfc WatchSource:0}: Error finding container d9a3490b17793bb4e7ab616cb3d40072de83fe5708c796fda986d13336282bfc: Status 404 returned error can't find the container with id d9a3490b17793bb4e7ab616cb3d40072de83fe5708c796fda986d13336282bfc Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.624927 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.624989 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.688338 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.728240 4907 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.748376 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:33:00 crc kubenswrapper[4907]: E0127 18:33:00.748714 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.827305 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-swift-storage-0\") pod \"2a5f060b-75dd-4083-badf-a9d208f59b65\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.827388 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-config\") pod \"2a5f060b-75dd-4083-badf-a9d208f59b65\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.827495 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-svc\") pod \"2a5f060b-75dd-4083-badf-a9d208f59b65\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.827869 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj74m\" 
(UniqueName: \"kubernetes.io/projected/2a5f060b-75dd-4083-badf-a9d208f59b65-kube-api-access-wj74m\") pod \"2a5f060b-75dd-4083-badf-a9d208f59b65\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.827901 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-nb\") pod \"2a5f060b-75dd-4083-badf-a9d208f59b65\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.827923 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-sb\") pod \"2a5f060b-75dd-4083-badf-a9d208f59b65\" (UID: \"2a5f060b-75dd-4083-badf-a9d208f59b65\") " Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.835084 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a5f060b-75dd-4083-badf-a9d208f59b65-kube-api-access-wj74m" (OuterVolumeSpecName: "kube-api-access-wj74m") pod "2a5f060b-75dd-4083-badf-a9d208f59b65" (UID: "2a5f060b-75dd-4083-badf-a9d208f59b65"). InnerVolumeSpecName "kube-api-access-wj74m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.901031 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2a5f060b-75dd-4083-badf-a9d208f59b65" (UID: "2a5f060b-75dd-4083-badf-a9d208f59b65"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.901999 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2a5f060b-75dd-4083-badf-a9d208f59b65" (UID: "2a5f060b-75dd-4083-badf-a9d208f59b65"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.906094 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2a5f060b-75dd-4083-badf-a9d208f59b65" (UID: "2a5f060b-75dd-4083-badf-a9d208f59b65"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.907776 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2a5f060b-75dd-4083-badf-a9d208f59b65" (UID: "2a5f060b-75dd-4083-badf-a9d208f59b65"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.931155 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj74m\" (UniqueName: \"kubernetes.io/projected/2a5f060b-75dd-4083-badf-a9d208f59b65-kube-api-access-wj74m\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.931188 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.931198 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.931206 4907 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.931217 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:00 crc kubenswrapper[4907]: I0127 18:33:00.947533 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-config" (OuterVolumeSpecName: "config") pod "2a5f060b-75dd-4083-badf-a9d208f59b65" (UID: "2a5f060b-75dd-4083-badf-a9d208f59b65"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.033854 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a5f060b-75dd-4083-badf-a9d208f59b65-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.328108 4907 generic.go:334] "Generic (PLEG): container finished" podID="5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e" containerID="31a2062688fcfc39ec1785f0eb116e327ef0432830cedbbd68c1c695d6bb9644" exitCode=0 Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.328181 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" event={"ID":"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e","Type":"ContainerDied","Data":"31a2062688fcfc39ec1785f0eb116e327ef0432830cedbbd68c1c695d6bb9644"} Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.328218 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" event={"ID":"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e","Type":"ContainerStarted","Data":"d9a3490b17793bb4e7ab616cb3d40072de83fe5708c796fda986d13336282bfc"} Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.330806 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.330829 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-rqfpj" event={"ID":"2a5f060b-75dd-4083-badf-a9d208f59b65","Type":"ContainerDied","Data":"4bd8eb5f48ea3f38d33f0dd542b84168a28c90547f3c08b18a3dbbf20455e507"} Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.330883 4907 scope.go:117] "RemoveContainer" containerID="5da906835235118c6e1fd88133f8ad0821d70a4ef6cd33bf22120c41b608b900" Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.379494 4907 scope.go:117] "RemoveContainer" containerID="f064d23f4fac689b5a994e7c87e0b8620a4d34af790c875556eda8d9fe99678c" Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.386547 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-rqfpj"] Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.397488 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-rqfpj"] Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.411784 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.464419 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f75nb"] Jan 27 18:33:01 crc kubenswrapper[4907]: I0127 18:33:01.845167 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a5f060b-75dd-4083-badf-a9d208f59b65" path="/var/lib/kubelet/pods/2a5f060b-75dd-4083-badf-a9d208f59b65/volumes" Jan 27 18:33:02 crc kubenswrapper[4907]: I0127 18:33:02.344209 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" event={"ID":"5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e","Type":"ContainerStarted","Data":"535db4e83b43e189c59263127328f7feec73639c2d150d1546dad635ff9ed5c3"} 
Jan 27 18:33:02 crc kubenswrapper[4907]: I0127 18:33:02.344535 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:33:02 crc kubenswrapper[4907]: I0127 18:33:02.380735 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" podStartSLOduration=3.380715607 podStartE2EDuration="3.380715607s" podCreationTimestamp="2026-01-27 18:32:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:33:02.369517259 +0000 UTC m=+1637.498799871" watchObservedRunningTime="2026-01-27 18:33:02.380715607 +0000 UTC m=+1637.509998219" Jan 27 18:33:03 crc kubenswrapper[4907]: I0127 18:33:03.362413 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6xh4v" event={"ID":"a67fd41b-79b0-4ab4-86b6-816389597620","Type":"ContainerStarted","Data":"fe15588b0331dbcfdd43e5562b4615a7d1e85094313a81d832de104826372490"} Jan 27 18:33:03 crc kubenswrapper[4907]: I0127 18:33:03.362772 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f75nb" podUID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" containerName="registry-server" containerID="cri-o://9be11158476d00ea37f53088ab02a3d4f06f2501f52b453473515af8af2a3f16" gracePeriod=2 Jan 27 18:33:03 crc kubenswrapper[4907]: I0127 18:33:03.419280 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-6xh4v" podStartSLOduration=2.530614913 podStartE2EDuration="46.419259111s" podCreationTimestamp="2026-01-27 18:32:17 +0000 UTC" firstStartedPulling="2026-01-27 18:32:18.170578403 +0000 UTC m=+1593.299861015" lastFinishedPulling="2026-01-27 18:33:02.059222581 +0000 UTC m=+1637.188505213" observedRunningTime="2026-01-27 18:33:03.410291456 +0000 UTC m=+1638.539574068" watchObservedRunningTime="2026-01-27 
18:33:03.419259111 +0000 UTC m=+1638.548541723" Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.380521 4907 generic.go:334] "Generic (PLEG): container finished" podID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" containerID="9be11158476d00ea37f53088ab02a3d4f06f2501f52b453473515af8af2a3f16" exitCode=0 Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.380671 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f75nb" event={"ID":"189c0f02-da43-4eb5-9cf1-ff9154e1a952","Type":"ContainerDied","Data":"9be11158476d00ea37f53088ab02a3d4f06f2501f52b453473515af8af2a3f16"} Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.605436 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.753698 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-catalog-content\") pod \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.754034 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vfkq\" (UniqueName: \"kubernetes.io/projected/189c0f02-da43-4eb5-9cf1-ff9154e1a952-kube-api-access-7vfkq\") pod \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.754192 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-utilities\") pod \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\" (UID: \"189c0f02-da43-4eb5-9cf1-ff9154e1a952\") " Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.754765 4907 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-utilities" (OuterVolumeSpecName: "utilities") pod "189c0f02-da43-4eb5-9cf1-ff9154e1a952" (UID: "189c0f02-da43-4eb5-9cf1-ff9154e1a952"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.755240 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.766953 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/189c0f02-da43-4eb5-9cf1-ff9154e1a952-kube-api-access-7vfkq" (OuterVolumeSpecName: "kube-api-access-7vfkq") pod "189c0f02-da43-4eb5-9cf1-ff9154e1a952" (UID: "189c0f02-da43-4eb5-9cf1-ff9154e1a952"). InnerVolumeSpecName "kube-api-access-7vfkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.799306 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "189c0f02-da43-4eb5-9cf1-ff9154e1a952" (UID: "189c0f02-da43-4eb5-9cf1-ff9154e1a952"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.857686 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/189c0f02-da43-4eb5-9cf1-ff9154e1a952-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:04 crc kubenswrapper[4907]: I0127 18:33:04.858548 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vfkq\" (UniqueName: \"kubernetes.io/projected/189c0f02-da43-4eb5-9cf1-ff9154e1a952-kube-api-access-7vfkq\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:05 crc kubenswrapper[4907]: I0127 18:33:05.400017 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f75nb" event={"ID":"189c0f02-da43-4eb5-9cf1-ff9154e1a952","Type":"ContainerDied","Data":"7c26280363be166379c041f79fa49e1cba31a92bb3f77d3485de0950431f2de4"} Jan 27 18:33:05 crc kubenswrapper[4907]: I0127 18:33:05.400107 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f75nb" Jan 27 18:33:05 crc kubenswrapper[4907]: I0127 18:33:05.400112 4907 scope.go:117] "RemoveContainer" containerID="9be11158476d00ea37f53088ab02a3d4f06f2501f52b453473515af8af2a3f16" Jan 27 18:33:05 crc kubenswrapper[4907]: I0127 18:33:05.401997 4907 generic.go:334] "Generic (PLEG): container finished" podID="a67fd41b-79b0-4ab4-86b6-816389597620" containerID="fe15588b0331dbcfdd43e5562b4615a7d1e85094313a81d832de104826372490" exitCode=0 Jan 27 18:33:05 crc kubenswrapper[4907]: I0127 18:33:05.402074 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6xh4v" event={"ID":"a67fd41b-79b0-4ab4-86b6-816389597620","Type":"ContainerDied","Data":"fe15588b0331dbcfdd43e5562b4615a7d1e85094313a81d832de104826372490"} Jan 27 18:33:05 crc kubenswrapper[4907]: I0127 18:33:05.451164 4907 scope.go:117] "RemoveContainer" containerID="12193fac6bdbc284c0df0311fcec63d63f4af964fa52f71afe109f0a8def08e1" Jan 27 18:33:05 crc kubenswrapper[4907]: I0127 18:33:05.466638 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f75nb"] Jan 27 18:33:05 crc kubenswrapper[4907]: I0127 18:33:05.480061 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f75nb"] Jan 27 18:33:05 crc kubenswrapper[4907]: I0127 18:33:05.482270 4907 scope.go:117] "RemoveContainer" containerID="245e56ea7e608b04825e7645b5a948e1629cf487574bdb62fd0d1a074cb20da1" Jan 27 18:33:05 crc kubenswrapper[4907]: I0127 18:33:05.768309 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" path="/var/lib/kubelet/pods/189c0f02-da43-4eb5-9cf1-ff9154e1a952/volumes" Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.100687 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-6xh4v" Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.230463 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-combined-ca-bundle\") pod \"a67fd41b-79b0-4ab4-86b6-816389597620\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.231800 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-config-data\") pod \"a67fd41b-79b0-4ab4-86b6-816389597620\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.231973 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsgzr\" (UniqueName: \"kubernetes.io/projected/a67fd41b-79b0-4ab4-86b6-816389597620-kube-api-access-hsgzr\") pod \"a67fd41b-79b0-4ab4-86b6-816389597620\" (UID: \"a67fd41b-79b0-4ab4-86b6-816389597620\") " Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.235922 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a67fd41b-79b0-4ab4-86b6-816389597620-kube-api-access-hsgzr" (OuterVolumeSpecName: "kube-api-access-hsgzr") pod "a67fd41b-79b0-4ab4-86b6-816389597620" (UID: "a67fd41b-79b0-4ab4-86b6-816389597620"). InnerVolumeSpecName "kube-api-access-hsgzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.264978 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a67fd41b-79b0-4ab4-86b6-816389597620" (UID: "a67fd41b-79b0-4ab4-86b6-816389597620"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.322653 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-config-data" (OuterVolumeSpecName: "config-data") pod "a67fd41b-79b0-4ab4-86b6-816389597620" (UID: "a67fd41b-79b0-4ab4-86b6-816389597620"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.335111 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.335149 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67fd41b-79b0-4ab4-86b6-816389597620-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.335161 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsgzr\" (UniqueName: \"kubernetes.io/projected/a67fd41b-79b0-4ab4-86b6-816389597620-kube-api-access-hsgzr\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.432700 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6xh4v" event={"ID":"a67fd41b-79b0-4ab4-86b6-816389597620","Type":"ContainerDied","Data":"05271780c2c58fb11a1e6317931f7b2ef2d5aa985e73775fc4e3c4ba9a95671b"} Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.432747 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05271780c2c58fb11a1e6317931f7b2ef2d5aa985e73775fc4e3c4ba9a95671b" Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.432777 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-6xh4v" Jan 27 18:33:07 crc kubenswrapper[4907]: I0127 18:33:07.782261 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 18:33:08 crc kubenswrapper[4907]: I0127 18:33:08.447319 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c","Type":"ContainerStarted","Data":"b4e2bae231d1e2ccce2f31b0049e3caad088021caaaace02895e084bde83eeb5"} Jan 27 18:33:08 crc kubenswrapper[4907]: I0127 18:33:08.475465 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.223742032 podStartE2EDuration="47.475446081s" podCreationTimestamp="2026-01-27 18:32:21 +0000 UTC" firstStartedPulling="2026-01-27 18:32:22.791282252 +0000 UTC m=+1597.920564864" lastFinishedPulling="2026-01-27 18:33:08.042986281 +0000 UTC m=+1643.172268913" observedRunningTime="2026-01-27 18:33:08.470894662 +0000 UTC m=+1643.600177334" watchObservedRunningTime="2026-01-27 18:33:08.475446081 +0000 UTC m=+1643.604728703" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.208307 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-668f78b-db9cs"] Jan 27 18:33:09 crc kubenswrapper[4907]: E0127 18:33:09.209214 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5f060b-75dd-4083-badf-a9d208f59b65" containerName="init" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.209234 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5f060b-75dd-4083-badf-a9d208f59b65" containerName="init" Jan 27 18:33:09 crc kubenswrapper[4907]: E0127 18:33:09.209251 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" containerName="extract-content" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.209260 4907 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" containerName="extract-content" Jan 27 18:33:09 crc kubenswrapper[4907]: E0127 18:33:09.209269 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67fd41b-79b0-4ab4-86b6-816389597620" containerName="heat-db-sync" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.209276 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67fd41b-79b0-4ab4-86b6-816389597620" containerName="heat-db-sync" Jan 27 18:33:09 crc kubenswrapper[4907]: E0127 18:33:09.209288 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" containerName="registry-server" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.209294 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" containerName="registry-server" Jan 27 18:33:09 crc kubenswrapper[4907]: E0127 18:33:09.209307 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" containerName="extract-utilities" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.209313 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" containerName="extract-utilities" Jan 27 18:33:09 crc kubenswrapper[4907]: E0127 18:33:09.209325 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5f060b-75dd-4083-badf-a9d208f59b65" containerName="dnsmasq-dns" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.209330 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5f060b-75dd-4083-badf-a9d208f59b65" containerName="dnsmasq-dns" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.209622 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="189c0f02-da43-4eb5-9cf1-ff9154e1a952" containerName="registry-server" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.209661 4907 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2a5f060b-75dd-4083-badf-a9d208f59b65" containerName="dnsmasq-dns" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.209672 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67fd41b-79b0-4ab4-86b6-816389597620" containerName="heat-db-sync" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.210548 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.224401 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-668f78b-db9cs"] Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.239847 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7b8679c4d-pw2cq"] Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.242851 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.276442 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7b8679c4d-pw2cq"] Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.292165 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d2540a9-525b-46c6-b0ae-23e163484c98-config-data-custom\") pod \"heat-engine-668f78b-db9cs\" (UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.292243 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d2540a9-525b-46c6-b0ae-23e163484c98-combined-ca-bundle\") pod \"heat-engine-668f78b-db9cs\" (UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.292295 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-config-data-custom\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.292324 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-internal-tls-certs\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.292357 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d2540a9-525b-46c6-b0ae-23e163484c98-config-data\") pod \"heat-engine-668f78b-db9cs\" (UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.292395 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svlrp\" (UniqueName: \"kubernetes.io/projected/0d2540a9-525b-46c6-b0ae-23e163484c98-kube-api-access-svlrp\") pod \"heat-engine-668f78b-db9cs\" (UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.292460 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-public-tls-certs\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 
18:33:09.292482 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvvjx\" (UniqueName: \"kubernetes.io/projected/14d9243a-0abc-40ce-9881-eef907bdafe3-kube-api-access-nvvjx\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.292516 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-combined-ca-bundle\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.292530 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-config-data\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.345600 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-96749fcd4-hh92n"] Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.347141 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.360861 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-96749fcd4-hh92n"] Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.405077 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-config-data-custom\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.405168 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-internal-tls-certs\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.405228 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-config-data\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.405268 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d2540a9-525b-46c6-b0ae-23e163484c98-config-data\") pod \"heat-engine-668f78b-db9cs\" (UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.405314 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-combined-ca-bundle\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.405372 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svlrp\" (UniqueName: \"kubernetes.io/projected/0d2540a9-525b-46c6-b0ae-23e163484c98-kube-api-access-svlrp\") pod \"heat-engine-668f78b-db9cs\" (UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.409261 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-public-tls-certs\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.409349 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2prb\" (UniqueName: \"kubernetes.io/projected/effdf66a-d041-45e1-a1f0-bd1367a2d80a-kube-api-access-n2prb\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.409425 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-public-tls-certs\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.409467 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvvjx\" 
(UniqueName: \"kubernetes.io/projected/14d9243a-0abc-40ce-9881-eef907bdafe3-kube-api-access-nvvjx\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.409549 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-config-data\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.409611 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-combined-ca-bundle\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.409641 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-config-data-custom\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.409746 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-internal-tls-certs\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.409810 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/0d2540a9-525b-46c6-b0ae-23e163484c98-config-data-custom\") pod \"heat-engine-668f78b-db9cs\" (UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.409922 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d2540a9-525b-46c6-b0ae-23e163484c98-combined-ca-bundle\") pod \"heat-engine-668f78b-db9cs\" (UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.420262 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-public-tls-certs\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.420275 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-config-data-custom\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.421104 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-internal-tls-certs\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.421818 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d2540a9-525b-46c6-b0ae-23e163484c98-config-data\") pod \"heat-engine-668f78b-db9cs\" 
(UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.422235 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-combined-ca-bundle\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.436173 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d9243a-0abc-40ce-9881-eef907bdafe3-config-data\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.437807 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d2540a9-525b-46c6-b0ae-23e163484c98-config-data-custom\") pod \"heat-engine-668f78b-db9cs\" (UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.438626 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d2540a9-525b-46c6-b0ae-23e163484c98-combined-ca-bundle\") pod \"heat-engine-668f78b-db9cs\" (UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.439781 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svlrp\" (UniqueName: \"kubernetes.io/projected/0d2540a9-525b-46c6-b0ae-23e163484c98-kube-api-access-svlrp\") pod \"heat-engine-668f78b-db9cs\" (UID: \"0d2540a9-525b-46c6-b0ae-23e163484c98\") " pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 
crc kubenswrapper[4907]: I0127 18:33:09.439819 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvvjx\" (UniqueName: \"kubernetes.io/projected/14d9243a-0abc-40ce-9881-eef907bdafe3-kube-api-access-nvvjx\") pod \"heat-api-7b8679c4d-pw2cq\" (UID: \"14d9243a-0abc-40ce-9881-eef907bdafe3\") " pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.512818 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-config-data\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.512889 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-combined-ca-bundle\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.512963 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-public-tls-certs\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.512997 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2prb\" (UniqueName: \"kubernetes.io/projected/effdf66a-d041-45e1-a1f0-bd1367a2d80a-kube-api-access-n2prb\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.513064 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-config-data-custom\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.513121 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-internal-tls-certs\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.519020 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-config-data\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.519861 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-config-data-custom\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.523320 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-combined-ca-bundle\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.523403 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-internal-tls-certs\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.530725 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.531824 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/effdf66a-d041-45e1-a1f0-bd1367a2d80a-public-tls-certs\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.536593 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2prb\" (UniqueName: \"kubernetes.io/projected/effdf66a-d041-45e1-a1f0-bd1367a2d80a-kube-api-access-n2prb\") pod \"heat-cfnapi-96749fcd4-hh92n\" (UID: \"effdf66a-d041-45e1-a1f0-bd1367a2d80a\") " pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.564372 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.672884 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.914144 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5596c69fcc-hhml4" Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.981473 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-7hqh2"] Jan 27 18:33:09 crc kubenswrapper[4907]: I0127 18:33:09.981794 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" podUID="f5364919-e030-4b8d-a22d-708b6c7bd0cb" containerName="dnsmasq-dns" containerID="cri-o://b250a8025f6304fc38d65bd406dc7fe5603770c18aba24c85b4ed2fa0ca48c1a" gracePeriod=10 Jan 27 18:33:10 crc kubenswrapper[4907]: W0127 18:33:10.184140 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d2540a9_525b_46c6_b0ae_23e163484c98.slice/crio-ba0c4f28a883dccd52495d8e211334814430cc69ca18f3cb1936304cac52319b WatchSource:0}: Error finding container ba0c4f28a883dccd52495d8e211334814430cc69ca18f3cb1936304cac52319b: Status 404 returned error can't find the container with id ba0c4f28a883dccd52495d8e211334814430cc69ca18f3cb1936304cac52319b Jan 27 18:33:10 crc kubenswrapper[4907]: I0127 18:33:10.191416 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-668f78b-db9cs"] Jan 27 18:33:10 crc kubenswrapper[4907]: I0127 18:33:10.509280 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7b8679c4d-pw2cq"] Jan 27 18:33:10 crc kubenswrapper[4907]: I0127 18:33:10.532753 4907 generic.go:334] "Generic (PLEG): container finished" podID="f5364919-e030-4b8d-a22d-708b6c7bd0cb" containerID="b250a8025f6304fc38d65bd406dc7fe5603770c18aba24c85b4ed2fa0ca48c1a" exitCode=0 Jan 27 18:33:10 crc kubenswrapper[4907]: I0127 18:33:10.532857 4907 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" event={"ID":"f5364919-e030-4b8d-a22d-708b6c7bd0cb","Type":"ContainerDied","Data":"b250a8025f6304fc38d65bd406dc7fe5603770c18aba24c85b4ed2fa0ca48c1a"} Jan 27 18:33:10 crc kubenswrapper[4907]: I0127 18:33:10.548614 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-668f78b-db9cs" event={"ID":"0d2540a9-525b-46c6-b0ae-23e163484c98","Type":"ContainerStarted","Data":"ba0c4f28a883dccd52495d8e211334814430cc69ca18f3cb1936304cac52319b"} Jan 27 18:33:10 crc kubenswrapper[4907]: I0127 18:33:10.718417 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-96749fcd4-hh92n"] Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.003320 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.086817 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-swift-storage-0\") pod \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.086858 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-sb\") pod \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.086931 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bzqw\" (UniqueName: \"kubernetes.io/projected/f5364919-e030-4b8d-a22d-708b6c7bd0cb-kube-api-access-4bzqw\") pod \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " Jan 27 18:33:11 crc 
kubenswrapper[4907]: I0127 18:33:11.087018 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-openstack-edpm-ipam\") pod \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.087057 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-nb\") pod \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.087104 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-svc\") pod \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.087135 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-config\") pod \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\" (UID: \"f5364919-e030-4b8d-a22d-708b6c7bd0cb\") " Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.096374 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5364919-e030-4b8d-a22d-708b6c7bd0cb-kube-api-access-4bzqw" (OuterVolumeSpecName: "kube-api-access-4bzqw") pod "f5364919-e030-4b8d-a22d-708b6c7bd0cb" (UID: "f5364919-e030-4b8d-a22d-708b6c7bd0cb"). InnerVolumeSpecName "kube-api-access-4bzqw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.191580 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bzqw\" (UniqueName: \"kubernetes.io/projected/f5364919-e030-4b8d-a22d-708b6c7bd0cb-kube-api-access-4bzqw\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.191875 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f5364919-e030-4b8d-a22d-708b6c7bd0cb" (UID: "f5364919-e030-4b8d-a22d-708b6c7bd0cb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.193257 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f5364919-e030-4b8d-a22d-708b6c7bd0cb" (UID: "f5364919-e030-4b8d-a22d-708b6c7bd0cb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.213172 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "f5364919-e030-4b8d-a22d-708b6c7bd0cb" (UID: "f5364919-e030-4b8d-a22d-708b6c7bd0cb"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.222762 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f5364919-e030-4b8d-a22d-708b6c7bd0cb" (UID: "f5364919-e030-4b8d-a22d-708b6c7bd0cb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.224145 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f5364919-e030-4b8d-a22d-708b6c7bd0cb" (UID: "f5364919-e030-4b8d-a22d-708b6c7bd0cb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.245077 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-config" (OuterVolumeSpecName: "config") pod "f5364919-e030-4b8d-a22d-708b6c7bd0cb" (UID: "f5364919-e030-4b8d-a22d-708b6c7bd0cb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.293944 4907 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.293986 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.293999 4907 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.294010 4907 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.294026 4907 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.294037 4907 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5364919-e030-4b8d-a22d-708b6c7bd0cb-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.563821 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-96749fcd4-hh92n" event={"ID":"effdf66a-d041-45e1-a1f0-bd1367a2d80a","Type":"ContainerStarted","Data":"929bf63f3810d7b9ddf81d81e0065ce2e51f867bef5434e751b2fa7893cb4152"} Jan 27 18:33:11 crc 
kubenswrapper[4907]: I0127 18:33:11.567540 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" event={"ID":"f5364919-e030-4b8d-a22d-708b6c7bd0cb","Type":"ContainerDied","Data":"b20e731040e42af751cf6bb4ab1aa4206ff247ad3654db6a8645c821c97c43b3"} Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.567620 4907 scope.go:117] "RemoveContainer" containerID="b250a8025f6304fc38d65bd406dc7fe5603770c18aba24c85b4ed2fa0ca48c1a" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.567820 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-7hqh2" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.575225 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-668f78b-db9cs" event={"ID":"0d2540a9-525b-46c6-b0ae-23e163484c98","Type":"ContainerStarted","Data":"3790edf5f6d66ff4eae110856a26bda957351b7ab3e5d82a518f0570f5fa97ef"} Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.575863 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.577780 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7b8679c4d-pw2cq" event={"ID":"14d9243a-0abc-40ce-9881-eef907bdafe3","Type":"ContainerStarted","Data":"47981bc5ae9c3b1f065892d0b6ab60013463ce2b0a916d4c859a64ae3368693c"} Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.611777 4907 scope.go:117] "RemoveContainer" containerID="e0a7e3e26185beb42cdcfc251cf0f5dc0ceaef0d5dea2938745dc73ee83d830c" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.613151 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-668f78b-db9cs" podStartSLOduration=2.6131333 podStartE2EDuration="2.6131333s" podCreationTimestamp="2026-01-27 18:33:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:33:11.597616539 +0000 UTC m=+1646.726899161" watchObservedRunningTime="2026-01-27 18:33:11.6131333 +0000 UTC m=+1646.742415902" Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.661117 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-7hqh2"] Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.686502 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-7hqh2"] Jan 27 18:33:11 crc kubenswrapper[4907]: I0127 18:33:11.774819 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5364919-e030-4b8d-a22d-708b6c7bd0cb" path="/var/lib/kubelet/pods/f5364919-e030-4b8d-a22d-708b6c7bd0cb/volumes" Jan 27 18:33:13 crc kubenswrapper[4907]: I0127 18:33:13.629614 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7b8679c4d-pw2cq" event={"ID":"14d9243a-0abc-40ce-9881-eef907bdafe3","Type":"ContainerStarted","Data":"8c03bc940a22e4e9cd8893ab9e72aa3f6028129f8e3d8bd16fa9e93afc489f39"} Jan 27 18:33:13 crc kubenswrapper[4907]: I0127 18:33:13.630131 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:13 crc kubenswrapper[4907]: I0127 18:33:13.633795 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-96749fcd4-hh92n" event={"ID":"effdf66a-d041-45e1-a1f0-bd1367a2d80a","Type":"ContainerStarted","Data":"501033acab0b35e726127b267bc195027fe219a33b78e00ba15fec516510908a"} Jan 27 18:33:13 crc kubenswrapper[4907]: I0127 18:33:13.633942 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:13 crc kubenswrapper[4907]: I0127 18:33:13.675083 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7b8679c4d-pw2cq" podStartSLOduration=2.493688766 
podStartE2EDuration="4.675062717s" podCreationTimestamp="2026-01-27 18:33:09 +0000 UTC" firstStartedPulling="2026-01-27 18:33:10.543172794 +0000 UTC m=+1645.672455406" lastFinishedPulling="2026-01-27 18:33:12.724546745 +0000 UTC m=+1647.853829357" observedRunningTime="2026-01-27 18:33:13.66883547 +0000 UTC m=+1648.798118082" watchObservedRunningTime="2026-01-27 18:33:13.675062717 +0000 UTC m=+1648.804345329" Jan 27 18:33:13 crc kubenswrapper[4907]: I0127 18:33:13.698108 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-96749fcd4-hh92n" podStartSLOduration=2.710636772 podStartE2EDuration="4.698087992s" podCreationTimestamp="2026-01-27 18:33:09 +0000 UTC" firstStartedPulling="2026-01-27 18:33:10.74046062 +0000 UTC m=+1645.869743232" lastFinishedPulling="2026-01-27 18:33:12.72791184 +0000 UTC m=+1647.857194452" observedRunningTime="2026-01-27 18:33:13.691506475 +0000 UTC m=+1648.820789087" watchObservedRunningTime="2026-01-27 18:33:13.698087992 +0000 UTC m=+1648.827370604" Jan 27 18:33:13 crc kubenswrapper[4907]: I0127 18:33:13.748824 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:33:13 crc kubenswrapper[4907]: E0127 18:33:13.749277 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:33:21 crc kubenswrapper[4907]: I0127 18:33:21.318264 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7b8679c4d-pw2cq" Jan 27 18:33:21 crc kubenswrapper[4907]: I0127 18:33:21.398777 4907 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/heat-api-667f9867c-2tvqc"] Jan 27 18:33:21 crc kubenswrapper[4907]: I0127 18:33:21.399013 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-667f9867c-2tvqc" podUID="e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" containerName="heat-api" containerID="cri-o://7fb90059097c3a083f21613ee4d5120a76dc2a28cb01ea77d74033a66e97445e" gracePeriod=60 Jan 27 18:33:21 crc kubenswrapper[4907]: I0127 18:33:21.558786 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-96749fcd4-hh92n" Jan 27 18:33:21 crc kubenswrapper[4907]: I0127 18:33:21.630460 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6b8c4994cf-k8h5g"] Jan 27 18:33:21 crc kubenswrapper[4907]: I0127 18:33:21.630735 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" podUID="97762448-336d-4609-a574-310d1b61aa04" containerName="heat-cfnapi" containerID="cri-o://2aaefe127aed6dba10d995e1c7d462041c3be74278927bb883d380dc5671700b" gracePeriod=60 Jan 27 18:33:22 crc kubenswrapper[4907]: I0127 18:33:22.740769 4907 generic.go:334] "Generic (PLEG): container finished" podID="5f8e936e-82a6-49cc-bb09-d247a2d0e47b" containerID="50f55f0c0b4a989d807726928ab2d56581879267e991795d560d13a89d68b702" exitCode=0 Jan 27 18:33:22 crc kubenswrapper[4907]: I0127 18:33:22.740825 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"5f8e936e-82a6-49cc-bb09-d247a2d0e47b","Type":"ContainerDied","Data":"50f55f0c0b4a989d807726928ab2d56581879267e991795d560d13a89d68b702"} Jan 27 18:33:23 crc kubenswrapper[4907]: I0127 18:33:23.772804 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"5f8e936e-82a6-49cc-bb09-d247a2d0e47b","Type":"ContainerStarted","Data":"513a21ffd0f39e6cc4dfae09c542ebb5143c73fcdc3dcc16edea83c1cd58a7f4"} Jan 27 18:33:23 crc 
kubenswrapper[4907]: I0127 18:33:23.773434 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 27 18:33:23 crc kubenswrapper[4907]: I0127 18:33:23.776337 4907 generic.go:334] "Generic (PLEG): container finished" podID="021272d4-b660-4c16-b9a6-befd84abe2cc" containerID="befd9365abfe2e65e5f9cdedac175feb33a24273b6a8cede89305220df15b5d5" exitCode=0 Jan 27 18:33:23 crc kubenswrapper[4907]: I0127 18:33:23.776382 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"021272d4-b660-4c16-b9a6-befd84abe2cc","Type":"ContainerDied","Data":"befd9365abfe2e65e5f9cdedac175feb33a24273b6a8cede89305220df15b5d5"} Jan 27 18:33:23 crc kubenswrapper[4907]: I0127 18:33:23.825263 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=42.825240871 podStartE2EDuration="42.825240871s" podCreationTimestamp="2026-01-27 18:32:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:33:23.809198575 +0000 UTC m=+1658.938481187" watchObservedRunningTime="2026-01-27 18:33:23.825240871 +0000 UTC m=+1658.954523473" Jan 27 18:33:24 crc kubenswrapper[4907]: I0127 18:33:24.792934 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"021272d4-b660-4c16-b9a6-befd84abe2cc","Type":"ContainerStarted","Data":"34c310d50042bf2bf53bdfbfbfbebf3fc3c04eb3f90e2d1cc6453ecb10918aa6"} Jan 27 18:33:24 crc kubenswrapper[4907]: I0127 18:33:24.794630 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:33:24 crc kubenswrapper[4907]: I0127 18:33:24.850086 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" podUID="97762448-336d-4609-a574-310d1b61aa04" containerName="heat-cfnapi" 
probeResult="failure" output="Get \"https://10.217.0.224:8000/healthcheck\": read tcp 10.217.0.2:53748->10.217.0.224:8000: read: connection reset by peer" Jan 27 18:33:24 crc kubenswrapper[4907]: I0127 18:33:24.854106 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=43.85408834 podStartE2EDuration="43.85408834s" podCreationTimestamp="2026-01-27 18:32:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:33:24.852758002 +0000 UTC m=+1659.982040644" watchObservedRunningTime="2026-01-27 18:33:24.85408834 +0000 UTC m=+1659.983370952" Jan 27 18:33:24 crc kubenswrapper[4907]: I0127 18:33:24.905888 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-667f9867c-2tvqc" podUID="e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.223:8004/healthcheck\": read tcp 10.217.0.2:50236->10.217.0.223:8004: read: connection reset by peer" Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.826107 4907 generic.go:334] "Generic (PLEG): container finished" podID="e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" containerID="7fb90059097c3a083f21613ee4d5120a76dc2a28cb01ea77d74033a66e97445e" exitCode=0 Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.826616 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-667f9867c-2tvqc" event={"ID":"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15","Type":"ContainerDied","Data":"7fb90059097c3a083f21613ee4d5120a76dc2a28cb01ea77d74033a66e97445e"} Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.826645 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-667f9867c-2tvqc" event={"ID":"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15","Type":"ContainerDied","Data":"982750ecf1d92da3b9717ddf32bec4e3216a8b464d7df0c13f157bfd3020e7bb"} Jan 27 18:33:25 crc 
kubenswrapper[4907]: I0127 18:33:25.826655 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="982750ecf1d92da3b9717ddf32bec4e3216a8b464d7df0c13f157bfd3020e7bb" Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.832704 4907 generic.go:334] "Generic (PLEG): container finished" podID="97762448-336d-4609-a574-310d1b61aa04" containerID="2aaefe127aed6dba10d995e1c7d462041c3be74278927bb883d380dc5671700b" exitCode=0 Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.832790 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" event={"ID":"97762448-336d-4609-a574-310d1b61aa04","Type":"ContainerDied","Data":"2aaefe127aed6dba10d995e1c7d462041c3be74278927bb883d380dc5671700b"} Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.832838 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" event={"ID":"97762448-336d-4609-a574-310d1b61aa04","Type":"ContainerDied","Data":"932bc826c2156d8a545c997d012284767107290f98ea3a005ea9a94b6a995a9a"} Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.832849 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="932bc826c2156d8a545c997d012284767107290f98ea3a005ea9a94b6a995a9a" Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.847725 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.851255 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.994868 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-combined-ca-bundle\") pod \"97762448-336d-4609-a574-310d1b61aa04\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.995035 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-public-tls-certs\") pod \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.995082 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5928l\" (UniqueName: \"kubernetes.io/projected/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-kube-api-access-5928l\") pod \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.995182 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-internal-tls-certs\") pod \"97762448-336d-4609-a574-310d1b61aa04\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.995217 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-public-tls-certs\") pod \"97762448-336d-4609-a574-310d1b61aa04\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.995821 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-combined-ca-bundle\") pod \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.995860 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data-custom\") pod \"97762448-336d-4609-a574-310d1b61aa04\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.995936 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data-custom\") pod \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.995963 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data\") pod \"97762448-336d-4609-a574-310d1b61aa04\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.996038 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nq4t\" (UniqueName: \"kubernetes.io/projected/97762448-336d-4609-a574-310d1b61aa04-kube-api-access-5nq4t\") pod \"97762448-336d-4609-a574-310d1b61aa04\" (UID: \"97762448-336d-4609-a574-310d1b61aa04\") " Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.996079 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data\") pod \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " 
Jan 27 18:33:25 crc kubenswrapper[4907]: I0127 18:33:25.996103 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-internal-tls-certs\") pod \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\" (UID: \"e3fa0e34-41f1-4d79-a10c-0ec6d4250e15\") " Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.001842 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" (UID: "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.008211 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "97762448-336d-4609-a574-310d1b61aa04" (UID: "97762448-336d-4609-a574-310d1b61aa04"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.013735 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-kube-api-access-5928l" (OuterVolumeSpecName: "kube-api-access-5928l") pod "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" (UID: "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15"). InnerVolumeSpecName "kube-api-access-5928l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.020948 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97762448-336d-4609-a574-310d1b61aa04-kube-api-access-5nq4t" (OuterVolumeSpecName: "kube-api-access-5nq4t") pod "97762448-336d-4609-a574-310d1b61aa04" (UID: "97762448-336d-4609-a574-310d1b61aa04"). InnerVolumeSpecName "kube-api-access-5nq4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.048429 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97762448-336d-4609-a574-310d1b61aa04" (UID: "97762448-336d-4609-a574-310d1b61aa04"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.099456 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nq4t\" (UniqueName: \"kubernetes.io/projected/97762448-336d-4609-a574-310d1b61aa04-kube-api-access-5nq4t\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.099488 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.099499 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5928l\" (UniqueName: \"kubernetes.io/projected/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-kube-api-access-5928l\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.099508 4907 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.099516 4907 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.162958 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data" (OuterVolumeSpecName: "config-data") pod "97762448-336d-4609-a574-310d1b61aa04" (UID: "97762448-336d-4609-a574-310d1b61aa04"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.167984 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" (UID: "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.184666 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" (UID: "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.201725 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data" (OuterVolumeSpecName: "config-data") pod "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" (UID: "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.201926 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" (UID: "e3fa0e34-41f1-4d79-a10c-0ec6d4250e15"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.203610 4907 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.203638 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.203649 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.203660 4907 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-internal-tls-certs\") on node \"crc\" DevicePath 
\"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.203672 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.208716 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "97762448-336d-4609-a574-310d1b61aa04" (UID: "97762448-336d-4609-a574-310d1b61aa04"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.214122 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "97762448-336d-4609-a574-310d1b61aa04" (UID: "97762448-336d-4609-a574-310d1b61aa04"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.305281 4907 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.305321 4907 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97762448-336d-4609-a574-310d1b61aa04-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.748699 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:33:26 crc kubenswrapper[4907]: E0127 18:33:26.749360 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.845724 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-667f9867c-2tvqc" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.845731 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6b8c4994cf-k8h5g" Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.900373 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6b8c4994cf-k8h5g"] Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.914537 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6b8c4994cf-k8h5g"] Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.935023 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-667f9867c-2tvqc"] Jan 27 18:33:26 crc kubenswrapper[4907]: I0127 18:33:26.939241 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-667f9867c-2tvqc"] Jan 27 18:33:27 crc kubenswrapper[4907]: I0127 18:33:27.765537 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97762448-336d-4609-a574-310d1b61aa04" path="/var/lib/kubelet/pods/97762448-336d-4609-a574-310d1b61aa04/volumes" Jan 27 18:33:27 crc kubenswrapper[4907]: I0127 18:33:27.766260 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" path="/var/lib/kubelet/pods/e3fa0e34-41f1-4d79-a10c-0ec6d4250e15/volumes" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.587955 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-668f78b-db9cs" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.644261 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-575dc845-lv7nr"] Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.644764 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-575dc845-lv7nr" podUID="51ff4a9d-d39e-4357-a248-4b93e5eeaf13" containerName="heat-engine" containerID="cri-o://f9851fe0ece01f814039aa40d824e3502803d48db20224fbe65365a65acca7f5" gracePeriod=60 Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.692802 4907 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4"] Jan 27 18:33:29 crc kubenswrapper[4907]: E0127 18:33:29.693325 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5364919-e030-4b8d-a22d-708b6c7bd0cb" containerName="init" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.693343 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5364919-e030-4b8d-a22d-708b6c7bd0cb" containerName="init" Jan 27 18:33:29 crc kubenswrapper[4907]: E0127 18:33:29.693377 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" containerName="heat-api" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.693385 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" containerName="heat-api" Jan 27 18:33:29 crc kubenswrapper[4907]: E0127 18:33:29.693417 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97762448-336d-4609-a574-310d1b61aa04" containerName="heat-cfnapi" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.693424 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="97762448-336d-4609-a574-310d1b61aa04" containerName="heat-cfnapi" Jan 27 18:33:29 crc kubenswrapper[4907]: E0127 18:33:29.693439 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5364919-e030-4b8d-a22d-708b6c7bd0cb" containerName="dnsmasq-dns" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.693445 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5364919-e030-4b8d-a22d-708b6c7bd0cb" containerName="dnsmasq-dns" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.693715 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="97762448-336d-4609-a574-310d1b61aa04" containerName="heat-cfnapi" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.693739 4907 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f5364919-e030-4b8d-a22d-708b6c7bd0cb" containerName="dnsmasq-dns" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.693759 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3fa0e34-41f1-4d79-a10c-0ec6d4250e15" containerName="heat-api" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.694757 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.700282 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.700601 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.700730 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.700771 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.704987 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4"] Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.792538 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.792842 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.792867 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.792931 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9wdj\" (UniqueName: \"kubernetes.io/projected/de193c6b-eba4-4eb3-95c4-0d7fe875691f-kube-api-access-c9wdj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.895509 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.895575 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-ssh-key-openstack-edpm-ipam\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.895603 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.895747 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9wdj\" (UniqueName: \"kubernetes.io/projected/de193c6b-eba4-4eb3-95c4-0d7fe875691f-kube-api-access-c9wdj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.901323 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.903977 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" 
Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.907163 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:29 crc kubenswrapper[4907]: I0127 18:33:29.928420 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9wdj\" (UniqueName: \"kubernetes.io/projected/de193c6b-eba4-4eb3-95c4-0d7fe875691f-kube-api-access-c9wdj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:30 crc kubenswrapper[4907]: I0127 18:33:30.018285 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" Jan 27 18:33:30 crc kubenswrapper[4907]: E0127 18:33:30.671016 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9851fe0ece01f814039aa40d824e3502803d48db20224fbe65365a65acca7f5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 18:33:30 crc kubenswrapper[4907]: E0127 18:33:30.683244 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9851fe0ece01f814039aa40d824e3502803d48db20224fbe65365a65acca7f5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 18:33:30 crc kubenswrapper[4907]: E0127 18:33:30.684998 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown 
desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9851fe0ece01f814039aa40d824e3502803d48db20224fbe65365a65acca7f5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 18:33:30 crc kubenswrapper[4907]: E0127 18:33:30.685051 4907 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-575dc845-lv7nr" podUID="51ff4a9d-d39e-4357-a248-4b93e5eeaf13" containerName="heat-engine" Jan 27 18:33:31 crc kubenswrapper[4907]: I0127 18:33:31.067606 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4"] Jan 27 18:33:31 crc kubenswrapper[4907]: I0127 18:33:31.912058 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" event={"ID":"de193c6b-eba4-4eb3-95c4-0d7fe875691f","Type":"ContainerStarted","Data":"d92515b9d06346fb3e0c12da1fccec05a0315bad218f3dfd7f1dfe6fa7a5f977"} Jan 27 18:33:35 crc kubenswrapper[4907]: I0127 18:33:35.271812 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-6d47577fc9-fz5kg" podUID="bfb5201d-eb44-42cb-a5ab-49520cc1e741" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.439183 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-lvm8r"] Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.454243 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-lvm8r"] Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.536927 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-zhncj"] Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.538543 4907 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.542437 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.569814 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-zhncj"] Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.607665 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-combined-ca-bundle\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.607818 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7knc\" (UniqueName: \"kubernetes.io/projected/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-kube-api-access-b7knc\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.607845 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-config-data\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.608046 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-scripts\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.709965 
4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7knc\" (UniqueName: \"kubernetes.io/projected/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-kube-api-access-b7knc\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.710005 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-config-data\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.710133 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-scripts\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.710208 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-combined-ca-bundle\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.716135 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-combined-ca-bundle\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.719473 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-config-data\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.729685 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7knc\" (UniqueName: \"kubernetes.io/projected/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-kube-api-access-b7knc\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.733049 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-scripts\") pod \"aodh-db-sync-zhncj\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.778458 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c16f7a68-05a6-494f-94ce-1774118b0592" path="/var/lib/kubelet/pods/c16f7a68-05a6-494f-94ce-1774118b0592/volumes" Jan 27 18:33:37 crc kubenswrapper[4907]: I0127 18:33:37.881096 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-zhncj" Jan 27 18:33:40 crc kubenswrapper[4907]: E0127 18:33:40.659994 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9851fe0ece01f814039aa40d824e3502803d48db20224fbe65365a65acca7f5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 18:33:40 crc kubenswrapper[4907]: E0127 18:33:40.661808 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9851fe0ece01f814039aa40d824e3502803d48db20224fbe65365a65acca7f5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 18:33:40 crc kubenswrapper[4907]: E0127 18:33:40.663015 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9851fe0ece01f814039aa40d824e3502803d48db20224fbe65365a65acca7f5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 18:33:40 crc kubenswrapper[4907]: E0127 18:33:40.663075 4907 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-575dc845-lv7nr" podUID="51ff4a9d-d39e-4357-a248-4b93e5eeaf13" containerName="heat-engine" Jan 27 18:33:40 crc kubenswrapper[4907]: I0127 18:33:40.749702 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:33:40 crc kubenswrapper[4907]: E0127 18:33:40.750103 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:33:41 crc kubenswrapper[4907]: I0127 18:33:41.481965 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 27 18:33:41 crc kubenswrapper[4907]: I0127 18:33:41.724764 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:33:41 crc kubenswrapper[4907]: I0127 18:33:41.740495 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 18:33:44 crc kubenswrapper[4907]: I0127 18:33:44.086966 4907 generic.go:334] "Generic (PLEG): container finished" podID="51ff4a9d-d39e-4357-a248-4b93e5eeaf13" containerID="f9851fe0ece01f814039aa40d824e3502803d48db20224fbe65365a65acca7f5" exitCode=0 Jan 27 18:33:44 crc kubenswrapper[4907]: I0127 18:33:44.087092 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-575dc845-lv7nr" event={"ID":"51ff4a9d-d39e-4357-a248-4b93e5eeaf13","Type":"ContainerDied","Data":"f9851fe0ece01f814039aa40d824e3502803d48db20224fbe65365a65acca7f5"} Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.010620 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-zhncj"] Jan 27 18:33:46 crc kubenswrapper[4907]: W0127 18:33:46.012258 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee2938a8_fe59_4c5a_abd0_7957ecb6b796.slice/crio-86658babcd009f4c231d1cce98b20246a650ecdc96a2992e945d254d757c8944 WatchSource:0}: Error finding container 86658babcd009f4c231d1cce98b20246a650ecdc96a2992e945d254d757c8944: Status 404 returned error can't find the container with id 
86658babcd009f4c231d1cce98b20246a650ecdc96a2992e945d254d757c8944 Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.054305 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.110564 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-575dc845-lv7nr" event={"ID":"51ff4a9d-d39e-4357-a248-4b93e5eeaf13","Type":"ContainerDied","Data":"04b2490b34471bde3da133012b6b62ccc9d41cf3e6a16b1fd242cf158ae8c1e2"} Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.110633 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04b2490b34471bde3da133012b6b62ccc9d41cf3e6a16b1fd242cf158ae8c1e2" Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.111817 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zhncj" event={"ID":"ee2938a8-fe59-4c5a-abd0-7957ecb6b796","Type":"ContainerStarted","Data":"86658babcd009f4c231d1cce98b20246a650ecdc96a2992e945d254d757c8944"} Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.241073 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-575dc845-lv7nr" Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.348116 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-combined-ca-bundle\") pod \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.348239 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data\") pod \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.348527 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data-custom\") pod \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.348712 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7r6bv\" (UniqueName: \"kubernetes.io/projected/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-kube-api-access-7r6bv\") pod \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\" (UID: \"51ff4a9d-d39e-4357-a248-4b93e5eeaf13\") " Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.353529 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "51ff4a9d-d39e-4357-a248-4b93e5eeaf13" (UID: "51ff4a9d-d39e-4357-a248-4b93e5eeaf13"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.361902 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-kube-api-access-7r6bv" (OuterVolumeSpecName: "kube-api-access-7r6bv") pod "51ff4a9d-d39e-4357-a248-4b93e5eeaf13" (UID: "51ff4a9d-d39e-4357-a248-4b93e5eeaf13"). InnerVolumeSpecName "kube-api-access-7r6bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.394137 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51ff4a9d-d39e-4357-a248-4b93e5eeaf13" (UID: "51ff4a9d-d39e-4357-a248-4b93e5eeaf13"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.418253 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data" (OuterVolumeSpecName: "config-data") pod "51ff4a9d-d39e-4357-a248-4b93e5eeaf13" (UID: "51ff4a9d-d39e-4357-a248-4b93e5eeaf13"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.451919 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.451965 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.451977 4907 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:46 crc kubenswrapper[4907]: I0127 18:33:46.451990 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7r6bv\" (UniqueName: \"kubernetes.io/projected/51ff4a9d-d39e-4357-a248-4b93e5eeaf13-kube-api-access-7r6bv\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:47 crc kubenswrapper[4907]: I0127 18:33:47.126474 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-575dc845-lv7nr"
Jan 27 18:33:47 crc kubenswrapper[4907]: I0127 18:33:47.126487 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" event={"ID":"de193c6b-eba4-4eb3-95c4-0d7fe875691f","Type":"ContainerStarted","Data":"53a4fbe6c4402dd00ad7adf4741e15c2cd063e7fa6f6cc532a14e9f28ea22129"}
Jan 27 18:33:47 crc kubenswrapper[4907]: I0127 18:33:47.155248 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" podStartSLOduration=3.170996426 podStartE2EDuration="18.155224067s" podCreationTimestamp="2026-01-27 18:33:29 +0000 UTC" firstStartedPulling="2026-01-27 18:33:31.066733025 +0000 UTC m=+1666.196015637" lastFinishedPulling="2026-01-27 18:33:46.050960676 +0000 UTC m=+1681.180243278" observedRunningTime="2026-01-27 18:33:47.145905452 +0000 UTC m=+1682.275188134" watchObservedRunningTime="2026-01-27 18:33:47.155224067 +0000 UTC m=+1682.284506689"
Jan 27 18:33:47 crc kubenswrapper[4907]: I0127 18:33:47.188425 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-575dc845-lv7nr"]
Jan 27 18:33:47 crc kubenswrapper[4907]: I0127 18:33:47.214391 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-575dc845-lv7nr"]
Jan 27 18:33:47 crc kubenswrapper[4907]: I0127 18:33:47.763547 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51ff4a9d-d39e-4357-a248-4b93e5eeaf13" path="/var/lib/kubelet/pods/51ff4a9d-d39e-4357-a248-4b93e5eeaf13/volumes"
Jan 27 18:33:50 crc kubenswrapper[4907]: I0127 18:33:50.273470 4907 scope.go:117] "RemoveContainer" containerID="9c8b4c0110be5f64f9312aa5e05b1c554859d60683e6ece65a511961809093cd"
Jan 27 18:33:51 crc kubenswrapper[4907]: I0127 18:33:51.311512 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="45d050d2-eeb4-4603-a6c4-1cbdd454ea35" containerName="rabbitmq" containerID="cri-o://4770e7fec46f1fc597410163b1386d755696535528a33b999e133fe947c9e759" gracePeriod=604791
Jan 27 18:33:51 crc kubenswrapper[4907]: I0127 18:33:51.997083 4907 scope.go:117] "RemoveContainer" containerID="16729300b105c848b87da536ab581fbf0466941c7a08dd9bcf81bc9c3e1432ed"
Jan 27 18:33:52 crc kubenswrapper[4907]: I0127 18:33:52.748282 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402"
Jan 27 18:33:52 crc kubenswrapper[4907]: E0127 18:33:52.748917 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 18:33:53 crc kubenswrapper[4907]: I0127 18:33:53.096761 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-zfszb" podUID="2ea123ce-4328-4379-8310-dbfff15acfbf" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 18:33:53 crc kubenswrapper[4907]: I0127 18:33:53.749557 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerName="galera" probeResult="failure" output="command timed out"
Jan 27 18:33:53 crc kubenswrapper[4907]: I0127 18:33:53.749818 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerName="galera" probeResult="failure" output="command timed out"
Jan 27 18:33:53 crc kubenswrapper[4907]: I0127 18:33:53.756871 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" podUID="7c6ac148-bc7a-4480-9155-8f78567a5070" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 18:33:54 crc kubenswrapper[4907]: I0127 18:33:54.045328 4907 scope.go:117] "RemoveContainer" containerID="3bbc9b483b2ac3711ce029100cb12ceb3f91e479b6591235f5a6fbedf804d371"
Jan 27 18:33:54 crc kubenswrapper[4907]: I0127 18:33:54.080708 4907 scope.go:117] "RemoveContainer" containerID="eab4549235d783c996004e82b23c0b9ceeeb842b079328aaaef5456cb5dca61b"
Jan 27 18:33:54 crc kubenswrapper[4907]: I0127 18:33:54.154862 4907 scope.go:117] "RemoveContainer" containerID="aca7542bafc6f8a501bc005b4af4e8a5df758a4f8de58c5b60071b0c8be6107f"
Jan 27 18:33:54 crc kubenswrapper[4907]: I0127 18:33:54.321042 4907 scope.go:117] "RemoveContainer" containerID="dcc95c68db7e4c6905571aec9659bfdb1013209939bc19b063e4a30e66ce2619"
Jan 27 18:33:54 crc kubenswrapper[4907]: I0127 18:33:54.327477 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 27 18:33:55 crc kubenswrapper[4907]: I0127 18:33:55.260520 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zhncj" event={"ID":"ee2938a8-fe59-4c5a-abd0-7957ecb6b796","Type":"ContainerStarted","Data":"71edccfab69f94ffccb7125670bbcbccf2cbcbd3a33a02eb0595cd8175c5d918"}
Jan 27 18:33:55 crc kubenswrapper[4907]: I0127 18:33:55.289159 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-zhncj" podStartSLOduration=9.982930682 podStartE2EDuration="18.289142024s" podCreationTimestamp="2026-01-27 18:33:37 +0000 UTC" firstStartedPulling="2026-01-27 18:33:46.01661674 +0000 UTC m=+1681.145899352" lastFinishedPulling="2026-01-27 18:33:54.322828082 +0000 UTC m=+1689.452110694" observedRunningTime="2026-01-27 18:33:55.276309759 +0000 UTC m=+1690.405592381" watchObservedRunningTime="2026-01-27 18:33:55.289142024 +0000 UTC m=+1690.418424636"
Jan 27 18:33:59 crc kubenswrapper[4907]: I0127 18:33:59.371721 4907 generic.go:334] "Generic (PLEG): container finished" podID="45d050d2-eeb4-4603-a6c4-1cbdd454ea35" containerID="4770e7fec46f1fc597410163b1386d755696535528a33b999e133fe947c9e759" exitCode=0
Jan 27 18:33:59 crc kubenswrapper[4907]: I0127 18:33:59.371827 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"45d050d2-eeb4-4603-a6c4-1cbdd454ea35","Type":"ContainerDied","Data":"4770e7fec46f1fc597410163b1386d755696535528a33b999e133fe947c9e759"}
Jan 27 18:33:59 crc kubenswrapper[4907]: I0127 18:33:59.374788 4907 generic.go:334] "Generic (PLEG): container finished" podID="de193c6b-eba4-4eb3-95c4-0d7fe875691f" containerID="53a4fbe6c4402dd00ad7adf4741e15c2cd063e7fa6f6cc532a14e9f28ea22129" exitCode=0
Jan 27 18:33:59 crc kubenswrapper[4907]: I0127 18:33:59.374834 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" event={"ID":"de193c6b-eba4-4eb3-95c4-0d7fe875691f","Type":"ContainerDied","Data":"53a4fbe6c4402dd00ad7adf4741e15c2cd063e7fa6f6cc532a14e9f28ea22129"}
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.395759 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"45d050d2-eeb4-4603-a6c4-1cbdd454ea35","Type":"ContainerDied","Data":"0484819ef66526692fd2b3dc5a8591e97aabacdddd5e6ecdeab067ea068207ea"}
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.396778 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0484819ef66526692fd2b3dc5a8591e97aabacdddd5e6ecdeab067ea068207ea"
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.479546 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.504545 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-tls\") pod \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") "
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.504790 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-erlang-cookie-secret\") pod \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") "
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.504886 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-pod-info\") pod \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") "
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.504915 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8hgp\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-kube-api-access-b8hgp\") pod \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") "
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.504957 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-confd\") pod \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") "
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.505006 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-config-data\") pod \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") "
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.505060 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-plugins\") pod \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") "
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.505081 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-server-conf\") pod \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") "
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.505171 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-erlang-cookie\") pod \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") "
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.515331 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "45d050d2-eeb4-4603-a6c4-1cbdd454ea35" (UID: "45d050d2-eeb4-4603-a6c4-1cbdd454ea35"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.525410 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "45d050d2-eeb4-4603-a6c4-1cbdd454ea35" (UID: "45d050d2-eeb4-4603-a6c4-1cbdd454ea35"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.526396 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\") pod \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") "
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.526595 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-plugins-conf\") pod \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\" (UID: \"45d050d2-eeb4-4603-a6c4-1cbdd454ea35\") "
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.528029 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.528061 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.532172 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "45d050d2-eeb4-4603-a6c4-1cbdd454ea35" (UID: "45d050d2-eeb4-4603-a6c4-1cbdd454ea35"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.536654 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-pod-info" (OuterVolumeSpecName: "pod-info") pod "45d050d2-eeb4-4603-a6c4-1cbdd454ea35" (UID: "45d050d2-eeb4-4603-a6c4-1cbdd454ea35"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.538822 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "45d050d2-eeb4-4603-a6c4-1cbdd454ea35" (UID: "45d050d2-eeb4-4603-a6c4-1cbdd454ea35"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.548453 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-kube-api-access-b8hgp" (OuterVolumeSpecName: "kube-api-access-b8hgp") pod "45d050d2-eeb4-4603-a6c4-1cbdd454ea35" (UID: "45d050d2-eeb4-4603-a6c4-1cbdd454ea35"). InnerVolumeSpecName "kube-api-access-b8hgp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.552695 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "45d050d2-eeb4-4603-a6c4-1cbdd454ea35" (UID: "45d050d2-eeb4-4603-a6c4-1cbdd454ea35"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.633335 4907 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.633372 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.633384 4907 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.633396 4907 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-pod-info\") on node \"crc\" DevicePath \"\""
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.633411 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8hgp\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-kube-api-access-b8hgp\") on node \"crc\" DevicePath \"\""
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.686065 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-config-data" (OuterVolumeSpecName: "config-data") pod "45d050d2-eeb4-4603-a6c4-1cbdd454ea35" (UID: "45d050d2-eeb4-4603-a6c4-1cbdd454ea35"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.723328 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-server-conf" (OuterVolumeSpecName: "server-conf") pod "45d050d2-eeb4-4603-a6c4-1cbdd454ea35" (UID: "45d050d2-eeb4-4603-a6c4-1cbdd454ea35"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.735762 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.735794 4907 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-server-conf\") on node \"crc\" DevicePath \"\""
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.813801 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "45d050d2-eeb4-4603-a6c4-1cbdd454ea35" (UID: "45d050d2-eeb4-4603-a6c4-1cbdd454ea35"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.840424 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/45d050d2-eeb4-4603-a6c4-1cbdd454ea35-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.902038 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0" (OuterVolumeSpecName: "persistence") pod "45d050d2-eeb4-4603-a6c4-1cbdd454ea35" (UID: "45d050d2-eeb4-4603-a6c4-1cbdd454ea35"). InnerVolumeSpecName "pvc-49bee2ea-921f-42f7-b022-927bee51e4f0". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 27 18:34:00 crc kubenswrapper[4907]: I0127 18:34:00.943458 4907 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\") on node \"crc\" "
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.012042 4907 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.012844 4907 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-49bee2ea-921f-42f7-b022-927bee51e4f0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0") on node "crc"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.045164 4907 reconciler_common.go:293] "Volume detached for volume \"pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\") on node \"crc\" DevicePath \"\""
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.286285 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.351419 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-repo-setup-combined-ca-bundle\") pod \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") "
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.351458 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9wdj\" (UniqueName: \"kubernetes.io/projected/de193c6b-eba4-4eb3-95c4-0d7fe875691f-kube-api-access-c9wdj\") pod \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") "
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.351623 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-inventory\") pod \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") "
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.351651 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-ssh-key-openstack-edpm-ipam\") pod \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\" (UID: \"de193c6b-eba4-4eb3-95c4-0d7fe875691f\") "
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.358728 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "de193c6b-eba4-4eb3-95c4-0d7fe875691f" (UID: "de193c6b-eba4-4eb3-95c4-0d7fe875691f"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.358888 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de193c6b-eba4-4eb3-95c4-0d7fe875691f-kube-api-access-c9wdj" (OuterVolumeSpecName: "kube-api-access-c9wdj") pod "de193c6b-eba4-4eb3-95c4-0d7fe875691f" (UID: "de193c6b-eba4-4eb3-95c4-0d7fe875691f"). InnerVolumeSpecName "kube-api-access-c9wdj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.403182 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-inventory" (OuterVolumeSpecName: "inventory") pod "de193c6b-eba4-4eb3-95c4-0d7fe875691f" (UID: "de193c6b-eba4-4eb3-95c4-0d7fe875691f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.408669 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "de193c6b-eba4-4eb3-95c4-0d7fe875691f" (UID: "de193c6b-eba4-4eb3-95c4-0d7fe875691f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.419296 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.420015 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.424886 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4" event={"ID":"de193c6b-eba4-4eb3-95c4-0d7fe875691f","Type":"ContainerDied","Data":"d92515b9d06346fb3e0c12da1fccec05a0315bad218f3dfd7f1dfe6fa7a5f977"}
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.424930 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d92515b9d06346fb3e0c12da1fccec05a0315bad218f3dfd7f1dfe6fa7a5f977"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.454337 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9wdj\" (UniqueName: \"kubernetes.io/projected/de193c6b-eba4-4eb3-95c4-0d7fe875691f-kube-api-access-c9wdj\") on node \"crc\" DevicePath \"\""
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.454368 4907 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.454379 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-inventory\") on node \"crc\" DevicePath \"\""
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.454391 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de193c6b-eba4-4eb3-95c4-0d7fe875691f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.511300 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.549833 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.566757 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"]
Jan 27 18:34:01 crc kubenswrapper[4907]: E0127 18:34:01.567274 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ff4a9d-d39e-4357-a248-4b93e5eeaf13" containerName="heat-engine"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.567290 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ff4a9d-d39e-4357-a248-4b93e5eeaf13" containerName="heat-engine"
Jan 27 18:34:01 crc kubenswrapper[4907]: E0127 18:34:01.567318 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de193c6b-eba4-4eb3-95c4-0d7fe875691f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.567325 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="de193c6b-eba4-4eb3-95c4-0d7fe875691f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 27 18:34:01 crc kubenswrapper[4907]: E0127 18:34:01.567340 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d050d2-eeb4-4603-a6c4-1cbdd454ea35" containerName="rabbitmq"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.567346 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d050d2-eeb4-4603-a6c4-1cbdd454ea35" containerName="rabbitmq"
Jan 27 18:34:01 crc kubenswrapper[4907]: E0127 18:34:01.567363 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d050d2-eeb4-4603-a6c4-1cbdd454ea35" containerName="setup-container"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.567370 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d050d2-eeb4-4603-a6c4-1cbdd454ea35" containerName="setup-container"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.567609 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d050d2-eeb4-4603-a6c4-1cbdd454ea35" containerName="rabbitmq"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.567629 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ff4a9d-d39e-4357-a248-4b93e5eeaf13" containerName="heat-engine"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.567648 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="de193c6b-eba4-4eb3-95c4-0d7fe875691f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.568419 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.574763 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.575231 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.575346 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.575486 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.595324 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"]
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.610550 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.612546 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.631171 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.657965 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e0246bb-5533-495d-849f-617b346c8fde-pod-info\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658038 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e0246bb-5533-495d-849f-617b346c8fde-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658098 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658220 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658273 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tgbss\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658296 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rfqj\" (UniqueName: \"kubernetes.io/projected/0e0246bb-5533-495d-849f-617b346c8fde-kube-api-access-5rfqj\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658353 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658393 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e0246bb-5533-495d-849f-617b346c8fde-config-data\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658465 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e0246bb-5533-495d-849f-617b346c8fde-server-conf\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658488 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e0246bb-5533-495d-849f-617b346c8fde-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658510 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658592 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658624 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tgbss\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.658646 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8zkp\" (UniqueName: \"kubernetes.io/projected/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-kube-api-access-k8zkp\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tgbss\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.760574 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.760911 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tgbss\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.761029 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rfqj\" (UniqueName: \"kubernetes.io/projected/0e0246bb-5533-495d-849f-617b346c8fde-kube-api-access-5rfqj\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.761098 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.761527 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1"
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.761926 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\"
(UniqueName: \"kubernetes.io/configmap/0e0246bb-5533-495d-849f-617b346c8fde-config-data\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.762110 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e0246bb-5533-495d-849f-617b346c8fde-server-conf\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.762198 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e0246bb-5533-495d-849f-617b346c8fde-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.762300 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.762469 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.762589 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tgbss\" (UID: 
\"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.762847 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8zkp\" (UniqueName: \"kubernetes.io/projected/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-kube-api-access-k8zkp\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tgbss\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.762926 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e0246bb-5533-495d-849f-617b346c8fde-pod-info\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.762999 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e0246bb-5533-495d-849f-617b346c8fde-config-data\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.763450 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e0246bb-5533-495d-849f-617b346c8fde-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.763881 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e0246bb-5533-495d-849f-617b346c8fde-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1" Jan 
27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.763950 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.764125 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.766514 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e0246bb-5533-495d-849f-617b346c8fde-pod-info\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.767952 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.769324 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45d050d2-eeb4-4603-a6c4-1cbdd454ea35" path="/var/lib/kubelet/pods/45d050d2-eeb4-4603-a6c4-1cbdd454ea35/volumes" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.770887 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e0246bb-5533-495d-849f-617b346c8fde-server-conf\") pod 
\"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.785207 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e0246bb-5533-495d-849f-617b346c8fde-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.787095 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tgbss\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.787202 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tgbss\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.788618 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.788647 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/34bf333d34756f1b83dde2eb30c2397a83048a027d2708516d2de7b96e990e99/globalmount\"" pod="openstack/rabbitmq-server-1" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.794517 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e0246bb-5533-495d-849f-617b346c8fde-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.795136 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8zkp\" (UniqueName: \"kubernetes.io/projected/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-kube-api-access-k8zkp\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tgbss\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.796223 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rfqj\" (UniqueName: \"kubernetes.io/projected/0e0246bb-5533-495d-849f-617b346c8fde-kube-api-access-5rfqj\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.875695 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49bee2ea-921f-42f7-b022-927bee51e4f0\") pod \"rabbitmq-server-1\" (UID: \"0e0246bb-5533-495d-849f-617b346c8fde\") " pod="openstack/rabbitmq-server-1" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.891362 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss" Jan 27 18:34:01 crc kubenswrapper[4907]: I0127 18:34:01.945453 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 27 18:34:02 crc kubenswrapper[4907]: W0127 18:34:02.633237 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2872f844_3f1a_4d9b_8f96_5cc01d0cae12.slice/crio-3bf6f63a946d7e6f725404b742cc37864e8c388bd4d2e7a6fc5cdd0cc9b6f9a8 WatchSource:0}: Error finding container 3bf6f63a946d7e6f725404b742cc37864e8c388bd4d2e7a6fc5cdd0cc9b6f9a8: Status 404 returned error can't find the container with id 3bf6f63a946d7e6f725404b742cc37864e8c388bd4d2e7a6fc5cdd0cc9b6f9a8 Jan 27 18:34:02 crc kubenswrapper[4907]: I0127 18:34:02.652937 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 18:34:02 crc kubenswrapper[4907]: I0127 18:34:02.668344 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss"] Jan 27 18:34:03 crc kubenswrapper[4907]: I0127 18:34:03.443180 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"0e0246bb-5533-495d-849f-617b346c8fde","Type":"ContainerStarted","Data":"cdbce7184be087e11e87f58ef85e79cb799451ef6f3067b5737634a652ed4b9c"} Jan 27 18:34:03 crc kubenswrapper[4907]: I0127 18:34:03.444873 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss" 
event={"ID":"2872f844-3f1a-4d9b-8f96-5cc01d0cae12","Type":"ContainerStarted","Data":"3bf6f63a946d7e6f725404b742cc37864e8c388bd4d2e7a6fc5cdd0cc9b6f9a8"} Jan 27 18:34:04 crc kubenswrapper[4907]: I0127 18:34:04.749402 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:34:04 crc kubenswrapper[4907]: E0127 18:34:04.750108 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:34:04 crc kubenswrapper[4907]: I0127 18:34:04.927418 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="45d050d2-eeb4-4603-a6c4-1cbdd454ea35" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: i/o timeout" Jan 27 18:34:05 crc kubenswrapper[4907]: I0127 18:34:05.470224 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"0e0246bb-5533-495d-849f-617b346c8fde","Type":"ContainerStarted","Data":"19be5fc1c14536ff846acb17420caf9b52b966701db8a7a3cd9d6ef8c854187d"} Jan 27 18:34:05 crc kubenswrapper[4907]: I0127 18:34:05.473105 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss" event={"ID":"2872f844-3f1a-4d9b-8f96-5cc01d0cae12","Type":"ContainerStarted","Data":"727a3fa5e1b5abfdf67af3851789c9f46024817a85e3ad5c07e34cf98ae61fde"} Jan 27 18:34:05 crc kubenswrapper[4907]: I0127 18:34:05.475147 4907 generic.go:334] "Generic (PLEG): container finished" podID="ee2938a8-fe59-4c5a-abd0-7957ecb6b796" 
containerID="71edccfab69f94ffccb7125670bbcbccf2cbcbd3a33a02eb0595cd8175c5d918" exitCode=0 Jan 27 18:34:05 crc kubenswrapper[4907]: I0127 18:34:05.475173 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zhncj" event={"ID":"ee2938a8-fe59-4c5a-abd0-7957ecb6b796","Type":"ContainerDied","Data":"71edccfab69f94ffccb7125670bbcbccf2cbcbd3a33a02eb0595cd8175c5d918"} Jan 27 18:34:05 crc kubenswrapper[4907]: I0127 18:34:05.516425 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss" podStartSLOduration=3.468642793 podStartE2EDuration="4.516401809s" podCreationTimestamp="2026-01-27 18:34:01 +0000 UTC" firstStartedPulling="2026-01-27 18:34:02.637228937 +0000 UTC m=+1697.766511549" lastFinishedPulling="2026-01-27 18:34:03.684987953 +0000 UTC m=+1698.814270565" observedRunningTime="2026-01-27 18:34:05.512900229 +0000 UTC m=+1700.642182841" watchObservedRunningTime="2026-01-27 18:34:05.516401809 +0000 UTC m=+1700.645684441" Jan 27 18:34:06 crc kubenswrapper[4907]: I0127 18:34:06.932794 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-zhncj" Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.037497 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-scripts\") pod \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.037809 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7knc\" (UniqueName: \"kubernetes.io/projected/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-kube-api-access-b7knc\") pod \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.037836 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-combined-ca-bundle\") pod \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.038045 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-config-data\") pod \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\" (UID: \"ee2938a8-fe59-4c5a-abd0-7957ecb6b796\") " Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.044802 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-kube-api-access-b7knc" (OuterVolumeSpecName: "kube-api-access-b7knc") pod "ee2938a8-fe59-4c5a-abd0-7957ecb6b796" (UID: "ee2938a8-fe59-4c5a-abd0-7957ecb6b796"). InnerVolumeSpecName "kube-api-access-b7knc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.052147 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-scripts" (OuterVolumeSpecName: "scripts") pod "ee2938a8-fe59-4c5a-abd0-7957ecb6b796" (UID: "ee2938a8-fe59-4c5a-abd0-7957ecb6b796"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.073089 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee2938a8-fe59-4c5a-abd0-7957ecb6b796" (UID: "ee2938a8-fe59-4c5a-abd0-7957ecb6b796"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.083686 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-config-data" (OuterVolumeSpecName: "config-data") pod "ee2938a8-fe59-4c5a-abd0-7957ecb6b796" (UID: "ee2938a8-fe59-4c5a-abd0-7957ecb6b796"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.140946 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.140990 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.140999 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7knc\" (UniqueName: \"kubernetes.io/projected/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-kube-api-access-b7knc\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.141009 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee2938a8-fe59-4c5a-abd0-7957ecb6b796-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.498912 4907 generic.go:334] "Generic (PLEG): container finished" podID="2872f844-3f1a-4d9b-8f96-5cc01d0cae12" containerID="727a3fa5e1b5abfdf67af3851789c9f46024817a85e3ad5c07e34cf98ae61fde" exitCode=0 Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.498996 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss" event={"ID":"2872f844-3f1a-4d9b-8f96-5cc01d0cae12","Type":"ContainerDied","Data":"727a3fa5e1b5abfdf67af3851789c9f46024817a85e3ad5c07e34cf98ae61fde"} Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.506008 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zhncj" 
event={"ID":"ee2938a8-fe59-4c5a-abd0-7957ecb6b796","Type":"ContainerDied","Data":"86658babcd009f4c231d1cce98b20246a650ecdc96a2992e945d254d757c8944"} Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.506057 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86658babcd009f4c231d1cce98b20246a650ecdc96a2992e945d254d757c8944" Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.507015 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-zhncj" Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.678869 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.679496 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-api" containerID="cri-o://e807df6bb34e8270bb99b18c9381629f1e3e316e54629be496e361af378d31fa" gracePeriod=30 Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.679671 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-listener" containerID="cri-o://d0ff64ffd6645fee3e3fa95ceff98c0ecb81b4ac75f1f079812bd17806737bad" gracePeriod=30 Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.680302 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-evaluator" containerID="cri-o://5c35d61e269c2c0e47646bb926be647ba9621988557f1e7af65617933e13dc86" gracePeriod=30 Jan 27 18:34:07 crc kubenswrapper[4907]: I0127 18:34:07.680337 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-notifier" 
containerID="cri-o://2b423fdd77ea55fad9d249dd41924a653f901fafa619ba0c32a88c8e47c3ddff" gracePeriod=30 Jan 27 18:34:08 crc kubenswrapper[4907]: I0127 18:34:08.521046 4907 generic.go:334] "Generic (PLEG): container finished" podID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerID="5c35d61e269c2c0e47646bb926be647ba9621988557f1e7af65617933e13dc86" exitCode=0 Jan 27 18:34:08 crc kubenswrapper[4907]: I0127 18:34:08.521082 4907 generic.go:334] "Generic (PLEG): container finished" podID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerID="e807df6bb34e8270bb99b18c9381629f1e3e316e54629be496e361af378d31fa" exitCode=0 Jan 27 18:34:08 crc kubenswrapper[4907]: I0127 18:34:08.521261 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a6c7b40d-63e2-4fbf-a59d-44c106984d76","Type":"ContainerDied","Data":"5c35d61e269c2c0e47646bb926be647ba9621988557f1e7af65617933e13dc86"} Jan 27 18:34:08 crc kubenswrapper[4907]: I0127 18:34:08.521291 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a6c7b40d-63e2-4fbf-a59d-44c106984d76","Type":"ContainerDied","Data":"e807df6bb34e8270bb99b18c9381629f1e3e316e54629be496e361af378d31fa"} Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.068529 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss" Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.188381 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-inventory\") pod \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") " Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.188829 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-ssh-key-openstack-edpm-ipam\") pod \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") " Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.188956 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8zkp\" (UniqueName: \"kubernetes.io/projected/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-kube-api-access-k8zkp\") pod \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") " Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.196452 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-kube-api-access-k8zkp" (OuterVolumeSpecName: "kube-api-access-k8zkp") pod "2872f844-3f1a-4d9b-8f96-5cc01d0cae12" (UID: "2872f844-3f1a-4d9b-8f96-5cc01d0cae12"). InnerVolumeSpecName "kube-api-access-k8zkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:34:09 crc kubenswrapper[4907]: E0127 18:34:09.224046 4907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-ssh-key-openstack-edpm-ipam podName:2872f844-3f1a-4d9b-8f96-5cc01d0cae12 nodeName:}" failed. 
No retries permitted until 2026-01-27 18:34:09.724014154 +0000 UTC m=+1704.853296776 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-ssh-key-openstack-edpm-ipam") pod "2872f844-3f1a-4d9b-8f96-5cc01d0cae12" (UID: "2872f844-3f1a-4d9b-8f96-5cc01d0cae12") : error deleting /var/lib/kubelet/pods/2872f844-3f1a-4d9b-8f96-5cc01d0cae12/volume-subpaths: remove /var/lib/kubelet/pods/2872f844-3f1a-4d9b-8f96-5cc01d0cae12/volume-subpaths: no such file or directory Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.227305 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-inventory" (OuterVolumeSpecName: "inventory") pod "2872f844-3f1a-4d9b-8f96-5cc01d0cae12" (UID: "2872f844-3f1a-4d9b-8f96-5cc01d0cae12"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.292073 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.292118 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8zkp\" (UniqueName: \"kubernetes.io/projected/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-kube-api-access-k8zkp\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.532606 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss" event={"ID":"2872f844-3f1a-4d9b-8f96-5cc01d0cae12","Type":"ContainerDied","Data":"3bf6f63a946d7e6f725404b742cc37864e8c388bd4d2e7a6fc5cdd0cc9b6f9a8"} Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.533385 4907 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="3bf6f63a946d7e6f725404b742cc37864e8c388bd4d2e7a6fc5cdd0cc9b6f9a8" Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.532634 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tgbss" Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.540965 4907 generic.go:334] "Generic (PLEG): container finished" podID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerID="d0ff64ffd6645fee3e3fa95ceff98c0ecb81b4ac75f1f079812bd17806737bad" exitCode=0 Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.541039 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a6c7b40d-63e2-4fbf-a59d-44c106984d76","Type":"ContainerDied","Data":"d0ff64ffd6645fee3e3fa95ceff98c0ecb81b4ac75f1f079812bd17806737bad"} Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.804697 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-ssh-key-openstack-edpm-ipam\") pod \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\" (UID: \"2872f844-3f1a-4d9b-8f96-5cc01d0cae12\") " Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.820173 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2872f844-3f1a-4d9b-8f96-5cc01d0cae12" (UID: "2872f844-3f1a-4d9b-8f96-5cc01d0cae12"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:09 crc kubenswrapper[4907]: I0127 18:34:09.907974 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2872f844-3f1a-4d9b-8f96-5cc01d0cae12-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.226043 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj"] Jan 27 18:34:10 crc kubenswrapper[4907]: E0127 18:34:10.226712 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2872f844-3f1a-4d9b-8f96-5cc01d0cae12" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.226739 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="2872f844-3f1a-4d9b-8f96-5cc01d0cae12" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 18:34:10 crc kubenswrapper[4907]: E0127 18:34:10.226762 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee2938a8-fe59-4c5a-abd0-7957ecb6b796" containerName="aodh-db-sync" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.226771 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee2938a8-fe59-4c5a-abd0-7957ecb6b796" containerName="aodh-db-sync" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.227058 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="2872f844-3f1a-4d9b-8f96-5cc01d0cae12" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.227079 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee2938a8-fe59-4c5a-abd0-7957ecb6b796" containerName="aodh-db-sync" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.228161 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.231214 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.231414 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.231244 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.231872 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.237696 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj"] Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.325897 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.325996 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: 
I0127 18:34:10.326108 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br56d\" (UniqueName: \"kubernetes.io/projected/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-kube-api-access-br56d\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.326251 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.428793 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br56d\" (UniqueName: \"kubernetes.io/projected/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-kube-api-access-br56d\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.428948 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.429084 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.429135 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.433247 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.433376 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.439000 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.448353 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br56d\" (UniqueName: \"kubernetes.io/projected/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-kube-api-access-br56d\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:10 crc kubenswrapper[4907]: I0127 18:34:10.562357 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:34:11 crc kubenswrapper[4907]: I0127 18:34:11.178024 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj"] Jan 27 18:34:11 crc kubenswrapper[4907]: I0127 18:34:11.661857 4907 generic.go:334] "Generic (PLEG): container finished" podID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerID="2b423fdd77ea55fad9d249dd41924a653f901fafa619ba0c32a88c8e47c3ddff" exitCode=0 Jan 27 18:34:11 crc kubenswrapper[4907]: I0127 18:34:11.661968 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a6c7b40d-63e2-4fbf-a59d-44c106984d76","Type":"ContainerDied","Data":"2b423fdd77ea55fad9d249dd41924a653f901fafa619ba0c32a88c8e47c3ddff"} Jan 27 18:34:11 crc kubenswrapper[4907]: I0127 18:34:11.682461 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" event={"ID":"172533fc-3de0-4a67-91d4-d54dbbf6e0e8","Type":"ContainerStarted","Data":"beeb809af464f1de247ce2aef34056bd50b453ac1014ef9b475c873dfa140da6"} Jan 27 18:34:11 crc kubenswrapper[4907]: I0127 18:34:11.994754 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.073278 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-internal-tls-certs\") pod \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.073403 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-scripts\") pod \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.073432 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-config-data\") pod \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.073610 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2p9w\" (UniqueName: \"kubernetes.io/projected/a6c7b40d-63e2-4fbf-a59d-44c106984d76-kube-api-access-n2p9w\") pod \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.073761 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-combined-ca-bundle\") pod \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.073787 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-public-tls-certs\") pod \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\" (UID: \"a6c7b40d-63e2-4fbf-a59d-44c106984d76\") " Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.081388 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-scripts" (OuterVolumeSpecName: "scripts") pod "a6c7b40d-63e2-4fbf-a59d-44c106984d76" (UID: "a6c7b40d-63e2-4fbf-a59d-44c106984d76"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.086574 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c7b40d-63e2-4fbf-a59d-44c106984d76-kube-api-access-n2p9w" (OuterVolumeSpecName: "kube-api-access-n2p9w") pod "a6c7b40d-63e2-4fbf-a59d-44c106984d76" (UID: "a6c7b40d-63e2-4fbf-a59d-44c106984d76"). InnerVolumeSpecName "kube-api-access-n2p9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.152962 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a6c7b40d-63e2-4fbf-a59d-44c106984d76" (UID: "a6c7b40d-63e2-4fbf-a59d-44c106984d76"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.178648 4907 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.178683 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2p9w\" (UniqueName: \"kubernetes.io/projected/a6c7b40d-63e2-4fbf-a59d-44c106984d76-kube-api-access-n2p9w\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.178695 4907 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.184277 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a6c7b40d-63e2-4fbf-a59d-44c106984d76" (UID: "a6c7b40d-63e2-4fbf-a59d-44c106984d76"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.232896 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-config-data" (OuterVolumeSpecName: "config-data") pod "a6c7b40d-63e2-4fbf-a59d-44c106984d76" (UID: "a6c7b40d-63e2-4fbf-a59d-44c106984d76"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.250081 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6c7b40d-63e2-4fbf-a59d-44c106984d76" (UID: "a6c7b40d-63e2-4fbf-a59d-44c106984d76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.281236 4907 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.281280 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.281292 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c7b40d-63e2-4fbf-a59d-44c106984d76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.697217 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a6c7b40d-63e2-4fbf-a59d-44c106984d76","Type":"ContainerDied","Data":"5ecc4602ae7879f3b687cadcefcbbf374dc1042900cda4e1ea1d10119888edd8"} Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.697273 4907 scope.go:117] "RemoveContainer" containerID="d0ff64ffd6645fee3e3fa95ceff98c0ecb81b4ac75f1f079812bd17806737bad" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.697287 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.708244 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" event={"ID":"172533fc-3de0-4a67-91d4-d54dbbf6e0e8","Type":"ContainerStarted","Data":"cd3b3fe4cc89215770648734713c16762101d8d8f5528da2b4cc19c06925044c"} Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.743912 4907 scope.go:117] "RemoveContainer" containerID="2b423fdd77ea55fad9d249dd41924a653f901fafa619ba0c32a88c8e47c3ddff" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.748845 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" podStartSLOduration=2.289007297 podStartE2EDuration="2.748800593s" podCreationTimestamp="2026-01-27 18:34:10 +0000 UTC" firstStartedPulling="2026-01-27 18:34:11.19754701 +0000 UTC m=+1706.326829622" lastFinishedPulling="2026-01-27 18:34:11.657340306 +0000 UTC m=+1706.786622918" observedRunningTime="2026-01-27 18:34:12.738391518 +0000 UTC m=+1707.867674130" watchObservedRunningTime="2026-01-27 18:34:12.748800593 +0000 UTC m=+1707.878083205" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.770340 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.777840 4907 scope.go:117] "RemoveContainer" containerID="5c35d61e269c2c0e47646bb926be647ba9621988557f1e7af65617933e13dc86" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.781338 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.799470 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 27 18:34:12 crc kubenswrapper[4907]: E0127 18:34:12.799932 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" 
containerName="aodh-listener" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.799947 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-listener" Jan 27 18:34:12 crc kubenswrapper[4907]: E0127 18:34:12.799966 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-evaluator" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.799972 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-evaluator" Jan 27 18:34:12 crc kubenswrapper[4907]: E0127 18:34:12.799998 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-api" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.800004 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-api" Jan 27 18:34:12 crc kubenswrapper[4907]: E0127 18:34:12.800019 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-notifier" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.800026 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-notifier" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.800251 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-api" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.800287 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-evaluator" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.800298 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-notifier" Jan 27 
18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.800309 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" containerName="aodh-listener" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.802355 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.807060 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.807302 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-xd6ml" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.807432 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.807635 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.808435 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.822901 4907 scope.go:117] "RemoveContainer" containerID="e807df6bb34e8270bb99b18c9381629f1e3e316e54629be496e361af378d31fa" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.838659 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.894436 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-public-tls-certs\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.894513 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-config-data\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.894549 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-internal-tls-certs\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.894747 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vq7w\" (UniqueName: \"kubernetes.io/projected/15bed332-56fa-45cd-8ab4-5d4cced0e671-kube-api-access-8vq7w\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.894867 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-combined-ca-bundle\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.895020 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-scripts\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.996973 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-public-tls-certs\") pod \"aodh-0\" 
(UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.997331 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-config-data\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.997375 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-internal-tls-certs\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.997516 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vq7w\" (UniqueName: \"kubernetes.io/projected/15bed332-56fa-45cd-8ab4-5d4cced0e671-kube-api-access-8vq7w\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.997611 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-combined-ca-bundle\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:12 crc kubenswrapper[4907]: I0127 18:34:12.997713 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-scripts\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:13 crc kubenswrapper[4907]: I0127 18:34:13.003372 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-public-tls-certs\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:13 crc kubenswrapper[4907]: I0127 18:34:13.003922 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-scripts\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:13 crc kubenswrapper[4907]: I0127 18:34:13.003928 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-config-data\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:13 crc kubenswrapper[4907]: I0127 18:34:13.005171 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-combined-ca-bundle\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:13 crc kubenswrapper[4907]: I0127 18:34:13.021326 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/15bed332-56fa-45cd-8ab4-5d4cced0e671-internal-tls-certs\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:13 crc kubenswrapper[4907]: I0127 18:34:13.022308 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vq7w\" (UniqueName: \"kubernetes.io/projected/15bed332-56fa-45cd-8ab4-5d4cced0e671-kube-api-access-8vq7w\") pod \"aodh-0\" (UID: \"15bed332-56fa-45cd-8ab4-5d4cced0e671\") " pod="openstack/aodh-0" Jan 27 18:34:13 crc kubenswrapper[4907]: I0127 18:34:13.124688 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 27 18:34:13 crc kubenswrapper[4907]: I0127 18:34:13.627013 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 18:34:13 crc kubenswrapper[4907]: W0127 18:34:13.628793 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15bed332_56fa_45cd_8ab4_5d4cced0e671.slice/crio-25fbc62960a5df43524998a0c53652b0bc81d89daad266611debd97d763ec86b WatchSource:0}: Error finding container 25fbc62960a5df43524998a0c53652b0bc81d89daad266611debd97d763ec86b: Status 404 returned error can't find the container with id 25fbc62960a5df43524998a0c53652b0bc81d89daad266611debd97d763ec86b Jan 27 18:34:13 crc kubenswrapper[4907]: I0127 18:34:13.720903 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"15bed332-56fa-45cd-8ab4-5d4cced0e671","Type":"ContainerStarted","Data":"25fbc62960a5df43524998a0c53652b0bc81d89daad266611debd97d763ec86b"} Jan 27 18:34:13 crc kubenswrapper[4907]: I0127 18:34:13.761733 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6c7b40d-63e2-4fbf-a59d-44c106984d76" path="/var/lib/kubelet/pods/a6c7b40d-63e2-4fbf-a59d-44c106984d76/volumes" Jan 27 18:34:14 crc kubenswrapper[4907]: I0127 18:34:14.738855 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"15bed332-56fa-45cd-8ab4-5d4cced0e671","Type":"ContainerStarted","Data":"f2436e7517e5de93bc206c51de13598ed68ba3e09d8dc335e519ad4419f25ae2"} Jan 27 18:34:15 crc kubenswrapper[4907]: I0127 18:34:15.766248 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"15bed332-56fa-45cd-8ab4-5d4cced0e671","Type":"ContainerStarted","Data":"8d2bdac5f867820cd01edaaeaec4c586fafdbbaf53c761b99e3633c45064d2de"} Jan 27 18:34:16 crc kubenswrapper[4907]: I0127 18:34:16.791745 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-0" event={"ID":"15bed332-56fa-45cd-8ab4-5d4cced0e671","Type":"ContainerStarted","Data":"b16708456ade029178668a362e73bf08cbec6aa9acf892393eb318e6e9616a1d"} Jan 27 18:34:18 crc kubenswrapper[4907]: I0127 18:34:18.816735 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"15bed332-56fa-45cd-8ab4-5d4cced0e671","Type":"ContainerStarted","Data":"618b205d959f06d236345b27bae6b8bdbe1b0c426c556e879eed5b93d4300dad"} Jan 27 18:34:18 crc kubenswrapper[4907]: I0127 18:34:18.854599 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.808798527 podStartE2EDuration="6.854570322s" podCreationTimestamp="2026-01-27 18:34:12 +0000 UTC" firstStartedPulling="2026-01-27 18:34:13.630519451 +0000 UTC m=+1708.759802063" lastFinishedPulling="2026-01-27 18:34:17.676291246 +0000 UTC m=+1712.805573858" observedRunningTime="2026-01-27 18:34:18.83868235 +0000 UTC m=+1713.967964962" watchObservedRunningTime="2026-01-27 18:34:18.854570322 +0000 UTC m=+1713.983852944" Jan 27 18:34:19 crc kubenswrapper[4907]: I0127 18:34:19.748798 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:34:19 crc kubenswrapper[4907]: E0127 18:34:19.749371 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:34:32 crc kubenswrapper[4907]: I0127 18:34:32.748020 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:34:32 crc kubenswrapper[4907]: E0127 18:34:32.748841 4907 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:34:37 crc kubenswrapper[4907]: I0127 18:34:37.036918 4907 generic.go:334] "Generic (PLEG): container finished" podID="0e0246bb-5533-495d-849f-617b346c8fde" containerID="19be5fc1c14536ff846acb17420caf9b52b966701db8a7a3cd9d6ef8c854187d" exitCode=0 Jan 27 18:34:37 crc kubenswrapper[4907]: I0127 18:34:37.037029 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"0e0246bb-5533-495d-849f-617b346c8fde","Type":"ContainerDied","Data":"19be5fc1c14536ff846acb17420caf9b52b966701db8a7a3cd9d6ef8c854187d"} Jan 27 18:34:38 crc kubenswrapper[4907]: I0127 18:34:38.050544 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"0e0246bb-5533-495d-849f-617b346c8fde","Type":"ContainerStarted","Data":"b92cf935e7c9be0c4e1f3ca984bf7c162cb346bc3030b781dc5f4c893afd96b9"} Jan 27 18:34:38 crc kubenswrapper[4907]: I0127 18:34:38.051071 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 27 18:34:38 crc kubenswrapper[4907]: I0127 18:34:38.093178 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.093156345 podStartE2EDuration="37.093156345s" podCreationTimestamp="2026-01-27 18:34:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:34:38.084026075 +0000 UTC m=+1733.213308687" watchObservedRunningTime="2026-01-27 18:34:38.093156345 +0000 UTC 
m=+1733.222438957" Jan 27 18:34:47 crc kubenswrapper[4907]: I0127 18:34:47.748725 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:34:47 crc kubenswrapper[4907]: E0127 18:34:47.749379 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:34:51 crc kubenswrapper[4907]: I0127 18:34:51.948720 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 27 18:34:52 crc kubenswrapper[4907]: I0127 18:34:52.005130 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:34:54 crc kubenswrapper[4907]: I0127 18:34:54.769435 4907 scope.go:117] "RemoveContainer" containerID="9e14e3ba528ee447cbbdbc0a37f0975e10855bd00aabc894dc382b32e4dc8e87" Jan 27 18:34:54 crc kubenswrapper[4907]: I0127 18:34:54.806190 4907 scope.go:117] "RemoveContainer" containerID="47d2b1818f481f9157351010298e3904201a2d3e7fa436dd0e807a41c1c54a28" Jan 27 18:34:54 crc kubenswrapper[4907]: I0127 18:34:54.858189 4907 scope.go:117] "RemoveContainer" containerID="86d08bea6d3c9bed7838ecc53f7ccd3c171b17cb0b7994ed9bfe6c1a1920772f" Jan 27 18:34:54 crc kubenswrapper[4907]: I0127 18:34:54.905476 4907 scope.go:117] "RemoveContainer" containerID="28bbb72e623034afdbf128221b10f5cd93ad8bc3e76bd585f307ba4d60b2e87c" Jan 27 18:34:54 crc kubenswrapper[4907]: I0127 18:34:54.934905 4907 scope.go:117] "RemoveContainer" containerID="4770e7fec46f1fc597410163b1386d755696535528a33b999e133fe947c9e759" Jan 27 18:34:55 crc kubenswrapper[4907]: I0127 18:34:55.904892 4907 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" containerName="rabbitmq" containerID="cri-o://7984064ca1dcff85b740cc99adb7b34caa53c5f6257193fa5be4a5e3dd9a8bf1" gracePeriod=604797 Jan 27 18:34:59 crc kubenswrapper[4907]: I0127 18:34:59.911829 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Jan 27 18:35:01 crc kubenswrapper[4907]: I0127 18:35:01.748121 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:35:01 crc kubenswrapper[4907]: E0127 18:35:01.748959 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.319910 4907 generic.go:334] "Generic (PLEG): container finished" podID="f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" containerID="7984064ca1dcff85b740cc99adb7b34caa53c5f6257193fa5be4a5e3dd9a8bf1" exitCode=0 Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.320009 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce","Type":"ContainerDied","Data":"7984064ca1dcff85b740cc99adb7b34caa53c5f6257193fa5be4a5e3dd9a8bf1"} Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.644581 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.742097 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-config-data\") pod \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.742147 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrrgc\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-kube-api-access-qrrgc\") pod \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.742179 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-erlang-cookie-secret\") pod \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.742204 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-erlang-cookie\") pod \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.743142 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\") pod \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.743346 4907 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-tls\") pod \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.743454 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-confd\") pod \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.743496 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-plugins-conf\") pod \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.743607 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-plugins\") pod \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.743637 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-server-conf\") pod \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\" (UID: \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.743676 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-pod-info\") pod \"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\" (UID: 
\"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce\") " Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.751517 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" (UID: "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.757408 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-kube-api-access-qrrgc" (OuterVolumeSpecName: "kube-api-access-qrrgc") pod "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" (UID: "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce"). InnerVolumeSpecName "kube-api-access-qrrgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.764505 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" (UID: "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.765119 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" (UID: "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.765453 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-pod-info" (OuterVolumeSpecName: "pod-info") pod "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" (UID: "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.778608 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" (UID: "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.778999 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" (UID: "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.810514 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-config-data" (OuterVolumeSpecName: "config-data") pod "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" (UID: "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.839749 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e" (OuterVolumeSpecName: "persistence") pod "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" (UID: "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce"). InnerVolumeSpecName "pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.870722 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.870793 4907 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.870806 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.870824 4907 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.870834 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.870846 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrrgc\" 
(UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-kube-api-access-qrrgc\") on node \"crc\" DevicePath \"\"" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.870858 4907 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.870867 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.870941 4907 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\") on node \"crc\" " Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.922748 4907 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.922924 4907 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e") on node "crc" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.933909 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-server-conf" (OuterVolumeSpecName: "server-conf") pod "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" (UID: "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.951606 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" (UID: "f97b2930-64e9-4f53-94b2-a3cbdb6b43ce"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.973752 4907 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.973810 4907 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 18:35:02 crc kubenswrapper[4907]: I0127 18:35:02.973841 4907 reconciler_common.go:293] "Volume detached for volume \"pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\") on node \"crc\" DevicePath \"\"" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.340486 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f97b2930-64e9-4f53-94b2-a3cbdb6b43ce","Type":"ContainerDied","Data":"f438ce9452f05a2c33576c461be5d8342246dc4a389096e1ff8d110a343a2c82"} Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.340611 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.340757 4907 scope.go:117] "RemoveContainer" containerID="7984064ca1dcff85b740cc99adb7b34caa53c5f6257193fa5be4a5e3dd9a8bf1" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.377695 4907 scope.go:117] "RemoveContainer" containerID="f4b13668a28a72bb72f1ac77a40e49f191cf7ff0408f2b34e10c4e165b48abf6" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.418072 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.457771 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.480660 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:35:03 crc kubenswrapper[4907]: E0127 18:35:03.481137 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" containerName="rabbitmq" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.481152 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" containerName="rabbitmq" Jan 27 18:35:03 crc kubenswrapper[4907]: E0127 18:35:03.481183 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" containerName="setup-container" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.481189 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" containerName="setup-container" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.481460 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" containerName="rabbitmq" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.488231 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.540059 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.596706 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0be9e879-df48-4aea-9f07-b297cabca4f3-config-data\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.596869 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.596940 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0be9e879-df48-4aea-9f07-b297cabca4f3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.597063 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.597166 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/0be9e879-df48-4aea-9f07-b297cabca4f3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.597247 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.608263 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.608366 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql7kk\" (UniqueName: \"kubernetes.io/projected/0be9e879-df48-4aea-9f07-b297cabca4f3-kube-api-access-ql7kk\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.608428 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.608523 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/0be9e879-df48-4aea-9f07-b297cabca4f3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.608657 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0be9e879-df48-4aea-9f07-b297cabca4f3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.712778 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0be9e879-df48-4aea-9f07-b297cabca4f3-config-data\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.712940 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.712991 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0be9e879-df48-4aea-9f07-b297cabca4f3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.713101 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.713196 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0be9e879-df48-4aea-9f07-b297cabca4f3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.713265 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.713310 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.713338 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql7kk\" (UniqueName: \"kubernetes.io/projected/0be9e879-df48-4aea-9f07-b297cabca4f3-kube-api-access-ql7kk\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.713371 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc 
kubenswrapper[4907]: I0127 18:35:03.713422 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0be9e879-df48-4aea-9f07-b297cabca4f3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.713477 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0be9e879-df48-4aea-9f07-b297cabca4f3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.713534 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.714155 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.714193 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0be9e879-df48-4aea-9f07-b297cabca4f3-config-data\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.716536 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/0be9e879-df48-4aea-9f07-b297cabca4f3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.717007 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0be9e879-df48-4aea-9f07-b297cabca4f3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.719781 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0be9e879-df48-4aea-9f07-b297cabca4f3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.720611 4907 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.737114 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0e84612870a5c0c4830950c12b2fd6510f31530f3fd62287fde6ecf77067364b/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.722400 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0be9e879-df48-4aea-9f07-b297cabca4f3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.723085 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.745674 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0be9e879-df48-4aea-9f07-b297cabca4f3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.752366 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql7kk\" (UniqueName: \"kubernetes.io/projected/0be9e879-df48-4aea-9f07-b297cabca4f3-kube-api-access-ql7kk\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") 
" pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.788387 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f97b2930-64e9-4f53-94b2-a3cbdb6b43ce" path="/var/lib/kubelet/pods/f97b2930-64e9-4f53-94b2-a3cbdb6b43ce/volumes" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.854645 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46b822cf-4ef2-46cc-a623-e0ac9e88a23e\") pod \"rabbitmq-server-0\" (UID: \"0be9e879-df48-4aea-9f07-b297cabca4f3\") " pod="openstack/rabbitmq-server-0" Jan 27 18:35:03 crc kubenswrapper[4907]: I0127 18:35:03.876982 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 18:35:04 crc kubenswrapper[4907]: I0127 18:35:04.549045 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:35:05 crc kubenswrapper[4907]: I0127 18:35:05.367651 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0be9e879-df48-4aea-9f07-b297cabca4f3","Type":"ContainerStarted","Data":"6aa71536be738a4b95b38c16f4c4fe61914c2fa67302373fde7c5fa831d6a1f1"} Jan 27 18:35:07 crc kubenswrapper[4907]: I0127 18:35:07.392492 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0be9e879-df48-4aea-9f07-b297cabca4f3","Type":"ContainerStarted","Data":"c2b48b96b3d6bda9890fa05bd4e999229c048ccc359b8e7fb2ef352c1d69f765"} Jan 27 18:35:12 crc kubenswrapper[4907]: I0127 18:35:12.748834 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:35:12 crc kubenswrapper[4907]: E0127 18:35:12.753542 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:35:26 crc kubenswrapper[4907]: I0127 18:35:26.748091 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:35:26 crc kubenswrapper[4907]: E0127 18:35:26.749152 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:35:39 crc kubenswrapper[4907]: I0127 18:35:39.766115 4907 generic.go:334] "Generic (PLEG): container finished" podID="0be9e879-df48-4aea-9f07-b297cabca4f3" containerID="c2b48b96b3d6bda9890fa05bd4e999229c048ccc359b8e7fb2ef352c1d69f765" exitCode=0 Jan 27 18:35:39 crc kubenswrapper[4907]: I0127 18:35:39.766243 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0be9e879-df48-4aea-9f07-b297cabca4f3","Type":"ContainerDied","Data":"c2b48b96b3d6bda9890fa05bd4e999229c048ccc359b8e7fb2ef352c1d69f765"} Jan 27 18:35:40 crc kubenswrapper[4907]: I0127 18:35:40.748633 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:35:40 crc kubenswrapper[4907]: E0127 18:35:40.749600 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:35:40 crc kubenswrapper[4907]: I0127 18:35:40.782713 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0be9e879-df48-4aea-9f07-b297cabca4f3","Type":"ContainerStarted","Data":"f07e603f1a8c006b9a7f92c74fe7cb34ea5edaa3d3f2a4619b58674baf6a3b5d"} Jan 27 18:35:40 crc kubenswrapper[4907]: I0127 18:35:40.782941 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 18:35:40 crc kubenswrapper[4907]: I0127 18:35:40.816079 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.816055087 podStartE2EDuration="37.816055087s" podCreationTimestamp="2026-01-27 18:35:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:35:40.810720225 +0000 UTC m=+1795.940002837" watchObservedRunningTime="2026-01-27 18:35:40.816055087 +0000 UTC m=+1795.945337709" Jan 27 18:35:53 crc kubenswrapper[4907]: I0127 18:35:53.749653 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:35:53 crc kubenswrapper[4907]: E0127 18:35:53.750809 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:35:53 crc kubenswrapper[4907]: I0127 18:35:53.879787 4907 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 18:35:55 crc kubenswrapper[4907]: I0127 18:35:55.071436 4907 scope.go:117] "RemoveContainer" containerID="2aaefe127aed6dba10d995e1c7d462041c3be74278927bb883d380dc5671700b" Jan 27 18:35:55 crc kubenswrapper[4907]: I0127 18:35:55.102382 4907 scope.go:117] "RemoveContainer" containerID="f9851fe0ece01f814039aa40d824e3502803d48db20224fbe65365a65acca7f5" Jan 27 18:35:55 crc kubenswrapper[4907]: I0127 18:35:55.129240 4907 scope.go:117] "RemoveContainer" containerID="7fb90059097c3a083f21613ee4d5120a76dc2a28cb01ea77d74033a66e97445e" Jan 27 18:35:55 crc kubenswrapper[4907]: I0127 18:35:55.817083 4907 scope.go:117] "RemoveContainer" containerID="6b5db1511c211da8819e569899e8589693bd0bdc02842f679cc92b27198c0258" Jan 27 18:35:55 crc kubenswrapper[4907]: I0127 18:35:55.890204 4907 scope.go:117] "RemoveContainer" containerID="f1afc28349370dcb3dde6c79a26ee69ec1c1fb55a0e0f4f75240430123f8db92" Jan 27 18:35:55 crc kubenswrapper[4907]: I0127 18:35:55.983915 4907 scope.go:117] "RemoveContainer" containerID="72c74aeeb4de5e1f3042ca5765a544364f882ce81d07c3193c2e4b04c8e2dbd3" Jan 27 18:36:07 crc kubenswrapper[4907]: I0127 18:36:07.748291 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:36:07 crc kubenswrapper[4907]: E0127 18:36:07.749134 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:36:22 crc kubenswrapper[4907]: I0127 18:36:22.748628 4907 scope.go:117] "RemoveContainer" 
containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:36:22 crc kubenswrapper[4907]: E0127 18:36:22.749700 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:36:36 crc kubenswrapper[4907]: I0127 18:36:36.748690 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:36:36 crc kubenswrapper[4907]: E0127 18:36:36.749483 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:36:49 crc kubenswrapper[4907]: I0127 18:36:49.065296 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-1e4c-account-create-update-9hkjc"] Jan 27 18:36:49 crc kubenswrapper[4907]: I0127 18:36:49.081214 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vqsnx"] Jan 27 18:36:49 crc kubenswrapper[4907]: I0127 18:36:49.092702 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-1e4c-account-create-update-9hkjc"] Jan 27 18:36:49 crc kubenswrapper[4907]: I0127 18:36:49.106148 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vqsnx"] Jan 27 18:36:49 crc 
kubenswrapper[4907]: I0127 18:36:49.760529 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5edef5c0-5919-4ddd-93cd-65b569c78603" path="/var/lib/kubelet/pods/5edef5c0-5919-4ddd-93cd-65b569c78603/volumes" Jan 27 18:36:49 crc kubenswrapper[4907]: I0127 18:36:49.762020 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5" path="/var/lib/kubelet/pods/ab30250e-90e1-4d1e-bc1e-7b4cd9fccbc5/volumes" Jan 27 18:36:51 crc kubenswrapper[4907]: I0127 18:36:51.749256 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:36:51 crc kubenswrapper[4907]: E0127 18:36:51.749940 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:36:55 crc kubenswrapper[4907]: I0127 18:36:55.035374 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-69b7-account-create-update-6pfhq"] Jan 27 18:36:55 crc kubenswrapper[4907]: I0127 18:36:55.048254 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-kpsck"] Jan 27 18:36:55 crc kubenswrapper[4907]: I0127 18:36:55.058981 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-69b7-account-create-update-6pfhq"] Jan 27 18:36:55 crc kubenswrapper[4907]: I0127 18:36:55.069232 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-kpsck"] Jan 27 18:36:55 crc kubenswrapper[4907]: I0127 18:36:55.761762 4907 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="a0adeee4-a225-49f2-8a87-f44aa772d5f2" path="/var/lib/kubelet/pods/a0adeee4-a225-49f2-8a87-f44aa772d5f2/volumes" Jan 27 18:36:55 crc kubenswrapper[4907]: I0127 18:36:55.763132 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1662136-4082-412a-9846-92ea9aff9350" path="/var/lib/kubelet/pods/e1662136-4082-412a-9846-92ea9aff9350/volumes" Jan 27 18:36:56 crc kubenswrapper[4907]: I0127 18:36:56.164786 4907 scope.go:117] "RemoveContainer" containerID="44d85d18431154ddbd383c884bcbc74eacef1eade71d6866721522e05fe32ba7" Jan 27 18:36:56 crc kubenswrapper[4907]: I0127 18:36:56.190906 4907 scope.go:117] "RemoveContainer" containerID="4d91c25a7314aab9b3fd8d4f969c9d2c94f6673a332760843f4352aa203efe16" Jan 27 18:36:56 crc kubenswrapper[4907]: I0127 18:36:56.259527 4907 scope.go:117] "RemoveContainer" containerID="7f3f482aaf8608c33753ad6013ec3d55dce11d1495376c8c771ab3fee9efdee3" Jan 27 18:36:56 crc kubenswrapper[4907]: I0127 18:36:56.311742 4907 scope.go:117] "RemoveContainer" containerID="f6666bbbc5694f2bb840d66dd6dd5334ea08b62866d58575c225940d82650561" Jan 27 18:36:56 crc kubenswrapper[4907]: I0127 18:36:56.376579 4907 scope.go:117] "RemoveContainer" containerID="439c0e228d54b650aa2d229cddd5634e727c8a516bf39e72c77e909e264787be" Jan 27 18:36:56 crc kubenswrapper[4907]: I0127 18:36:56.404028 4907 scope.go:117] "RemoveContainer" containerID="e2fec9f980876bf8fc48b1230ddea98e34e541b132ba4a428836b64324d1589b" Jan 27 18:36:56 crc kubenswrapper[4907]: I0127 18:36:56.444145 4907 scope.go:117] "RemoveContainer" containerID="bbe32d131e2f18cc943ec38e3f64224872de067fbdcc4c36da535318442ade1c" Jan 27 18:36:56 crc kubenswrapper[4907]: I0127 18:36:56.497101 4907 scope.go:117] "RemoveContainer" containerID="88276e87d3b070cf8843fa34d81f32ef9093bc5ca757768f4520044bd9bd9abd" Jan 27 18:36:56 crc kubenswrapper[4907]: I0127 18:36:56.563248 4907 scope.go:117] "RemoveContainer" containerID="59803aa5ed2bce30a33ad11ee77adc43ad17ca6fa1fac9ff279ab08a8ad25f5d" 
Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.048464 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-214c-account-create-update-5x6dm"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.068595 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-z8s67"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.084894 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-214c-account-create-update-5x6dm"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.101009 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-9r669"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.116392 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-z8s67"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.131455 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-9r669"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.141783 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c84c-account-create-update-4ld5d"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.153385 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-0abc-account-create-update-gwjft"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.163390 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c84c-account-create-update-4ld5d"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.175101 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-0abc-account-create-update-gwjft"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.185929 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-qdj7p"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.196040 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-db-create-qdj7p"] Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.761636 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ef0a2ee-9212-41c9-b2b9-d59602779eef" path="/var/lib/kubelet/pods/0ef0a2ee-9212-41c9-b2b9-d59602779eef/volumes" Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.762569 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dfbf931-f21b-4652-8640-0208df4b40cc" path="/var/lib/kubelet/pods/3dfbf931-f21b-4652-8640-0208df4b40cc/volumes" Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.763812 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d" path="/var/lib/kubelet/pods/84c433c1-ca56-4d2d-ac7b-0f2ceadcaf8d/volumes" Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.764401 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94f0fdef-b14b-4204-be1e-90a5d19c96e7" path="/var/lib/kubelet/pods/94f0fdef-b14b-4204-be1e-90a5d19c96e7/volumes" Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.765422 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7319b76-e25b-4370-ac3e-641efd764024" path="/var/lib/kubelet/pods/d7319b76-e25b-4370-ac3e-641efd764024/volumes" Jan 27 18:36:59 crc kubenswrapper[4907]: I0127 18:36:59.766017 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1904c81-5de8-431a-9304-5b4ba1771c73" path="/var/lib/kubelet/pods/f1904c81-5de8-431a-9304-5b4ba1771c73/volumes" Jan 27 18:37:03 crc kubenswrapper[4907]: I0127 18:37:03.748701 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:37:05 crc kubenswrapper[4907]: I0127 18:37:05.018458 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" 
event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"659950f25293dd44f05a7437433bdb1b277bc9b532caa10ac47c8c5fa872cd61"} Jan 27 18:37:21 crc kubenswrapper[4907]: I0127 18:37:21.196216 4907 generic.go:334] "Generic (PLEG): container finished" podID="172533fc-3de0-4a67-91d4-d54dbbf6e0e8" containerID="cd3b3fe4cc89215770648734713c16762101d8d8f5528da2b4cc19c06925044c" exitCode=0 Jan 27 18:37:21 crc kubenswrapper[4907]: I0127 18:37:21.196297 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" event={"ID":"172533fc-3de0-4a67-91d4-d54dbbf6e0e8","Type":"ContainerDied","Data":"cd3b3fe4cc89215770648734713c16762101d8d8f5528da2b4cc19c06925044c"} Jan 27 18:37:22 crc kubenswrapper[4907]: I0127 18:37:22.724658 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:37:22 crc kubenswrapper[4907]: I0127 18:37:22.798278 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-ssh-key-openstack-edpm-ipam\") pod \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " Jan 27 18:37:22 crc kubenswrapper[4907]: I0127 18:37:22.798583 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-bootstrap-combined-ca-bundle\") pod \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " Jan 27 18:37:22 crc kubenswrapper[4907]: I0127 18:37:22.798636 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-inventory\") pod 
\"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " Jan 27 18:37:22 crc kubenswrapper[4907]: I0127 18:37:22.798705 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br56d\" (UniqueName: \"kubernetes.io/projected/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-kube-api-access-br56d\") pod \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\" (UID: \"172533fc-3de0-4a67-91d4-d54dbbf6e0e8\") " Jan 27 18:37:22 crc kubenswrapper[4907]: I0127 18:37:22.872128 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "172533fc-3de0-4a67-91d4-d54dbbf6e0e8" (UID: "172533fc-3de0-4a67-91d4-d54dbbf6e0e8"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:37:22 crc kubenswrapper[4907]: I0127 18:37:22.875896 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-kube-api-access-br56d" (OuterVolumeSpecName: "kube-api-access-br56d") pod "172533fc-3de0-4a67-91d4-d54dbbf6e0e8" (UID: "172533fc-3de0-4a67-91d4-d54dbbf6e0e8"). InnerVolumeSpecName "kube-api-access-br56d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:37:22 crc kubenswrapper[4907]: I0127 18:37:22.902055 4907 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:22 crc kubenswrapper[4907]: I0127 18:37:22.902195 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-br56d\" (UniqueName: \"kubernetes.io/projected/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-kube-api-access-br56d\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:22 crc kubenswrapper[4907]: I0127 18:37:22.906622 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "172533fc-3de0-4a67-91d4-d54dbbf6e0e8" (UID: "172533fc-3de0-4a67-91d4-d54dbbf6e0e8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:37:22 crc kubenswrapper[4907]: I0127 18:37:22.910626 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-inventory" (OuterVolumeSpecName: "inventory") pod "172533fc-3de0-4a67-91d4-d54dbbf6e0e8" (UID: "172533fc-3de0-4a67-91d4-d54dbbf6e0e8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.004598 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.004634 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/172533fc-3de0-4a67-91d4-d54dbbf6e0e8-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.217986 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" event={"ID":"172533fc-3de0-4a67-91d4-d54dbbf6e0e8","Type":"ContainerDied","Data":"beeb809af464f1de247ce2aef34056bd50b453ac1014ef9b475c873dfa140da6"} Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.218038 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beeb809af464f1de247ce2aef34056bd50b453ac1014ef9b475c873dfa140da6" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.218043 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.306596 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j"] Jan 27 18:37:23 crc kubenswrapper[4907]: E0127 18:37:23.307136 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="172533fc-3de0-4a67-91d4-d54dbbf6e0e8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.307154 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="172533fc-3de0-4a67-91d4-d54dbbf6e0e8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.307376 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="172533fc-3de0-4a67-91d4-d54dbbf6e0e8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.308261 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.311687 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.311861 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.311973 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.319648 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j"] Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.353966 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.439680 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g9qw\" (UniqueName: \"kubernetes.io/projected/ad792b6c-ce47-4ef4-964c-e91423a94f1b-kube-api-access-7g9qw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.439841 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:37:23 crc 
kubenswrapper[4907]: I0127 18:37:23.439960 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.542429 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g9qw\" (UniqueName: \"kubernetes.io/projected/ad792b6c-ce47-4ef4-964c-e91423a94f1b-kube-api-access-7g9qw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.542575 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.542689 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.550856 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.556729 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.568667 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g9qw\" (UniqueName: \"kubernetes.io/projected/ad792b6c-ce47-4ef4-964c-e91423a94f1b-kube-api-access-7g9qw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:37:23 crc kubenswrapper[4907]: I0127 18:37:23.626328 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:37:24 crc kubenswrapper[4907]: I0127 18:37:24.212819 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j"] Jan 27 18:37:25 crc kubenswrapper[4907]: I0127 18:37:25.244728 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" event={"ID":"ad792b6c-ce47-4ef4-964c-e91423a94f1b","Type":"ContainerStarted","Data":"5fad2c5feb01370ee16f93dca8ec3bd45d6de814e0cf3a459e134697d4374f3f"} Jan 27 18:37:26 crc kubenswrapper[4907]: I0127 18:37:26.258060 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" event={"ID":"ad792b6c-ce47-4ef4-964c-e91423a94f1b","Type":"ContainerStarted","Data":"d41a0650651648b9dc8466dd8517d8fbe456875cb9074e13651305905101e034"} Jan 27 18:37:26 crc kubenswrapper[4907]: I0127 18:37:26.291760 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" podStartSLOduration=1.789014147 podStartE2EDuration="3.291733896s" podCreationTimestamp="2026-01-27 18:37:23 +0000 UTC" firstStartedPulling="2026-01-27 18:37:24.246231641 +0000 UTC m=+1899.375514253" lastFinishedPulling="2026-01-27 18:37:25.74895139 +0000 UTC m=+1900.878234002" observedRunningTime="2026-01-27 18:37:26.28102956 +0000 UTC m=+1901.410312192" watchObservedRunningTime="2026-01-27 18:37:26.291733896 +0000 UTC m=+1901.421016508" Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.055542 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-jsvqc"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.073680 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-qz6th"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 
18:37:39.095066 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-f55d-account-create-update-gfk7k"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.102929 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-fa35-account-create-update-nlm4d"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.115760 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-4cxkf"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.128460 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-4f95-account-create-update-s69m5"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.138028 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-lpvwr"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.146969 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8259-account-create-update-b45js"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.155827 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fxqjb"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.164601 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-jsvqc"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.173763 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-4cxkf"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.182967 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-qz6th"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.192167 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-4f95-account-create-update-s69m5"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.201738 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-fa35-account-create-update-nlm4d"] Jan 27 18:37:39 crc 
kubenswrapper[4907]: I0127 18:37:39.211253 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-lpvwr"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.219973 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-fxqjb"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.229201 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8259-account-create-update-b45js"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.237922 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-f55d-account-create-update-gfk7k"] Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.761296 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32b7a898-5d57-496a-8ad1-380b636e3629" path="/var/lib/kubelet/pods/32b7a898-5d57-496a-8ad1-380b636e3629/volumes" Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.762906 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="421865e2-2878-4bc4-9480-7afb5e7133fd" path="/var/lib/kubelet/pods/421865e2-2878-4bc4-9480-7afb5e7133fd/volumes" Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.764169 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="701eaff9-db27-4bff-975c-b8ebf034725f" path="/var/lib/kubelet/pods/701eaff9-db27-4bff-975c-b8ebf034725f/volumes" Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.766566 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7844ef4e-92dd-4ea6-a792-b255290ef833" path="/var/lib/kubelet/pods/7844ef4e-92dd-4ea6-a792-b255290ef833/volumes" Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.771448 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85c8faae-95fb-4533-b45c-51e91bb95947" path="/var/lib/kubelet/pods/85c8faae-95fb-4533-b45c-51e91bb95947/volumes" Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.772935 4907 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="ac5cca69-8afc-417f-9f22-93c279730bf7" path="/var/lib/kubelet/pods/ac5cca69-8afc-417f-9f22-93c279730bf7/volumes" Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.774764 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b54f9573-0bd6-4133-872a-b9e73129d654" path="/var/lib/kubelet/pods/b54f9573-0bd6-4133-872a-b9e73129d654/volumes" Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.776147 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3998964-67eb-4adb-912d-a6367ae3beaf" path="/var/lib/kubelet/pods/c3998964-67eb-4adb-912d-a6367ae3beaf/volumes" Jan 27 18:37:39 crc kubenswrapper[4907]: I0127 18:37:39.778254 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d206e054-cdc8-4a59-9de8-93bfeae80700" path="/var/lib/kubelet/pods/d206e054-cdc8-4a59-9de8-93bfeae80700/volumes" Jan 27 18:37:44 crc kubenswrapper[4907]: I0127 18:37:44.030980 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-jjm2k"] Jan 27 18:37:44 crc kubenswrapper[4907]: I0127 18:37:44.043062 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-jjm2k"] Jan 27 18:37:45 crc kubenswrapper[4907]: I0127 18:37:45.766903 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dd8fea0-24a6-4212-875a-5cf95105f549" path="/var/lib/kubelet/pods/2dd8fea0-24a6-4212-875a-5cf95105f549/volumes" Jan 27 18:37:56 crc kubenswrapper[4907]: I0127 18:37:56.069250 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-d856z"] Jan 27 18:37:56 crc kubenswrapper[4907]: I0127 18:37:56.081233 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-d856z"] Jan 27 18:37:56 crc kubenswrapper[4907]: I0127 18:37:56.783350 4907 scope.go:117] "RemoveContainer" containerID="10e55fe5e5f3f44965d382e66da77d31f65621ac8cb2c4078f7f47ef99fb45e2" Jan 27 18:37:56 crc 
kubenswrapper[4907]: I0127 18:37:56.834908 4907 scope.go:117] "RemoveContainer" containerID="fe0fec6016bed853e22ca7a88bd8e6b3e7fd78881c47ba5750950f4d5911aee9" Jan 27 18:37:56 crc kubenswrapper[4907]: I0127 18:37:56.998646 4907 scope.go:117] "RemoveContainer" containerID="f4fe6d9aa44983cf3005cdcf2f1caa2f42f6abc6fd5fd5929b1ffc71281905af" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.103826 4907 scope.go:117] "RemoveContainer" containerID="4b4bc386243282ee46469e04f6c8ed985996c9353b6cb7136b8bfb839c0ee6f9" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.137848 4907 scope.go:117] "RemoveContainer" containerID="96f5f54754dcd10e1621eddfd599cc7bbc58a42f87dd064a880d55efea873246" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.186367 4907 scope.go:117] "RemoveContainer" containerID="68b89d92a4036b54b7b4b4e2ad10f550a2312816b916239f6cf930b267395fdc" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.280242 4907 scope.go:117] "RemoveContainer" containerID="3b72532d7cd8d07a853ff3494ad26622a5147545c076d688f217632749ffc944" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.312097 4907 scope.go:117] "RemoveContainer" containerID="902d727f209f42dea64c5a07767c7eefd3763b39fbd8787f8e221e479efe5a44" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.336951 4907 scope.go:117] "RemoveContainer" containerID="948b6eac5d689d6120c4131f15b39236cab8c2fef0b0c2e8b5e2f67979a39d45" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.400629 4907 scope.go:117] "RemoveContainer" containerID="c19d26368aac03d155fdb3c70b0039080c0304f82ccc02493e32e5a1524bf346" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.424284 4907 scope.go:117] "RemoveContainer" containerID="6e9d2124e0377737283913dd9cbf18f7728bb3d38ed97f318b0a2c7e1a625185" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.471042 4907 scope.go:117] "RemoveContainer" containerID="3e594d56c4f1e528436d6bb4f406deabb15b3cc82f5b2f614f1632a7cd5cb661" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 
18:37:57.560205 4907 scope.go:117] "RemoveContainer" containerID="7e112e59f5539451246e55f428962aa397a6f7440a0b99d285fc7caa5e097dae" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.591203 4907 scope.go:117] "RemoveContainer" containerID="63ae2b63f7a45c10875f978382e0401747b3a11acdd681eda189d55c63e35186" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.619180 4907 scope.go:117] "RemoveContainer" containerID="40e8e634f7c46a3b2a6980b5fabdea0883786df5d7e952882f0176d870d9c0b4" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.657912 4907 scope.go:117] "RemoveContainer" containerID="a0326a0a501bbf85df41833a1dcafeaa580f24dd04c07b7e0136b03e2680cb1d" Jan 27 18:37:57 crc kubenswrapper[4907]: I0127 18:37:57.778064 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e2cf5dd-be65-4237-b77e-9bcc84cd26de" path="/var/lib/kubelet/pods/1e2cf5dd-be65-4237-b77e-9bcc84cd26de/volumes" Jan 27 18:38:19 crc kubenswrapper[4907]: I0127 18:38:19.040082 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-8p796"] Jan 27 18:38:19 crc kubenswrapper[4907]: I0127 18:38:19.052759 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-8p796"] Jan 27 18:38:19 crc kubenswrapper[4907]: I0127 18:38:19.765366 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e539a06-3352-4163-a259-6fd53182fe02" path="/var/lib/kubelet/pods/9e539a06-3352-4163-a259-6fd53182fe02/volumes" Jan 27 18:38:32 crc kubenswrapper[4907]: I0127 18:38:32.034448 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-px4wp"] Jan 27 18:38:32 crc kubenswrapper[4907]: I0127 18:38:32.044988 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-6gppf"] Jan 27 18:38:32 crc kubenswrapper[4907]: I0127 18:38:32.058379 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-px4wp"] Jan 27 18:38:32 crc 
kubenswrapper[4907]: I0127 18:38:32.069897 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-6gppf"] Jan 27 18:38:33 crc kubenswrapper[4907]: I0127 18:38:33.037388 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-x9tl4"] Jan 27 18:38:33 crc kubenswrapper[4907]: I0127 18:38:33.049801 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-x9tl4"] Jan 27 18:38:33 crc kubenswrapper[4907]: I0127 18:38:33.762856 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d3838ba-a929-4aab-a58d-dd4f39628f00" path="/var/lib/kubelet/pods/3d3838ba-a929-4aab-a58d-dd4f39628f00/volumes" Jan 27 18:38:33 crc kubenswrapper[4907]: I0127 18:38:33.764206 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b745a073-e4cf-471d-92ce-ac5da568b38e" path="/var/lib/kubelet/pods/b745a073-e4cf-471d-92ce-ac5da568b38e/volumes" Jan 27 18:38:33 crc kubenswrapper[4907]: I0127 18:38:33.766415 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de23a4c9-a62e-4523-8480-b19f3f10f586" path="/var/lib/kubelet/pods/de23a4c9-a62e-4523-8480-b19f3f10f586/volumes" Jan 27 18:38:49 crc kubenswrapper[4907]: I0127 18:38:49.043750 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-kbngs"] Jan 27 18:38:49 crc kubenswrapper[4907]: I0127 18:38:49.070723 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-kbngs"] Jan 27 18:38:49 crc kubenswrapper[4907]: I0127 18:38:49.761325 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c" path="/var/lib/kubelet/pods/fcd5520e-0de7-4c3c-b1e7-9f4e4a37030c/volumes" Jan 27 18:38:58 crc kubenswrapper[4907]: I0127 18:38:58.192052 4907 scope.go:117] "RemoveContainer" containerID="89581dcdb8d4c922466b9ce122633bb71ff2a690ee6340da6db9f720efc193a2" Jan 27 18:38:58 crc kubenswrapper[4907]: I0127 
18:38:58.219074 4907 scope.go:117] "RemoveContainer" containerID="c6ec5c767366a96ce4d265d6ebdb584e1c40e865966b1cddfe60f049c2cfcbf9" Jan 27 18:38:58 crc kubenswrapper[4907]: I0127 18:38:58.283459 4907 scope.go:117] "RemoveContainer" containerID="b27d9c0fcfb493cd20b36c4d0cecc4afcfa83f18386b3491676b63c3ccd64964" Jan 27 18:38:58 crc kubenswrapper[4907]: I0127 18:38:58.352041 4907 scope.go:117] "RemoveContainer" containerID="7c80b303772f301c239f6686efd8654edcc36c31a198990442336d23f2216d7c" Jan 27 18:38:58 crc kubenswrapper[4907]: I0127 18:38:58.409432 4907 scope.go:117] "RemoveContainer" containerID="3518bec6a2e71252950966bac08f219ba89fc2257f1b77a5d56f0854105b5f87" Jan 27 18:38:58 crc kubenswrapper[4907]: I0127 18:38:58.487034 4907 scope.go:117] "RemoveContainer" containerID="8fc0bac54c69cf6fe462be2636919fc30b1e5a1988f7c83b7d0f943527b1e3fc" Jan 27 18:39:20 crc kubenswrapper[4907]: I0127 18:39:20.540946 4907 generic.go:334] "Generic (PLEG): container finished" podID="ad792b6c-ce47-4ef4-964c-e91423a94f1b" containerID="d41a0650651648b9dc8466dd8517d8fbe456875cb9074e13651305905101e034" exitCode=0 Jan 27 18:39:20 crc kubenswrapper[4907]: I0127 18:39:20.541046 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" event={"ID":"ad792b6c-ce47-4ef4-964c-e91423a94f1b","Type":"ContainerDied","Data":"d41a0650651648b9dc8466dd8517d8fbe456875cb9074e13651305905101e034"} Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.033392 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.220494 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g9qw\" (UniqueName: \"kubernetes.io/projected/ad792b6c-ce47-4ef4-964c-e91423a94f1b-kube-api-access-7g9qw\") pod \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.220699 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-ssh-key-openstack-edpm-ipam\") pod \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.220866 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-inventory\") pod \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\" (UID: \"ad792b6c-ce47-4ef4-964c-e91423a94f1b\") " Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.227141 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad792b6c-ce47-4ef4-964c-e91423a94f1b-kube-api-access-7g9qw" (OuterVolumeSpecName: "kube-api-access-7g9qw") pod "ad792b6c-ce47-4ef4-964c-e91423a94f1b" (UID: "ad792b6c-ce47-4ef4-964c-e91423a94f1b"). InnerVolumeSpecName "kube-api-access-7g9qw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.266119 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-inventory" (OuterVolumeSpecName: "inventory") pod "ad792b6c-ce47-4ef4-964c-e91423a94f1b" (UID: "ad792b6c-ce47-4ef4-964c-e91423a94f1b"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.277467 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ad792b6c-ce47-4ef4-964c-e91423a94f1b" (UID: "ad792b6c-ce47-4ef4-964c-e91423a94f1b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.324139 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.324175 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad792b6c-ce47-4ef4-964c-e91423a94f1b-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.324190 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g9qw\" (UniqueName: \"kubernetes.io/projected/ad792b6c-ce47-4ef4-964c-e91423a94f1b-kube-api-access-7g9qw\") on node \"crc\" DevicePath \"\"" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.566023 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" event={"ID":"ad792b6c-ce47-4ef4-964c-e91423a94f1b","Type":"ContainerDied","Data":"5fad2c5feb01370ee16f93dca8ec3bd45d6de814e0cf3a459e134697d4374f3f"} Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.566064 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fad2c5feb01370ee16f93dca8ec3bd45d6de814e0cf3a459e134697d4374f3f" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 
18:39:22.566073 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.658586 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5"] Jan 27 18:39:22 crc kubenswrapper[4907]: E0127 18:39:22.659105 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad792b6c-ce47-4ef4-964c-e91423a94f1b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.659125 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad792b6c-ce47-4ef4-964c-e91423a94f1b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.659495 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad792b6c-ce47-4ef4-964c-e91423a94f1b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.660454 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.670937 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.670979 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.671105 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.671438 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.677821 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5"] Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.837303 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.837913 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:39:22 crc 
kubenswrapper[4907]: I0127 18:39:22.838047 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrj8l\" (UniqueName: \"kubernetes.io/projected/0aabc401-314e-438d-920e-1f984949944c-kube-api-access-wrj8l\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.940241 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.940327 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrj8l\" (UniqueName: \"kubernetes.io/projected/0aabc401-314e-438d-920e-1f984949944c-kube-api-access-wrj8l\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.940388 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.945006 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.948050 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.959311 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrj8l\" (UniqueName: \"kubernetes.io/projected/0aabc401-314e-438d-920e-1f984949944c-kube-api-access-wrj8l\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:39:22 crc kubenswrapper[4907]: I0127 18:39:22.994043 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:39:23 crc kubenswrapper[4907]: I0127 18:39:23.556753 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5"] Jan 27 18:39:23 crc kubenswrapper[4907]: I0127 18:39:23.557462 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:39:23 crc kubenswrapper[4907]: I0127 18:39:23.582678 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" event={"ID":"0aabc401-314e-438d-920e-1f984949944c","Type":"ContainerStarted","Data":"2f310d8883a1b1ec42ffd46d55f5e6c7fe1d1fb6b0cd2987a556d596484d7627"} Jan 27 18:39:24 crc kubenswrapper[4907]: I0127 18:39:24.593026 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" event={"ID":"0aabc401-314e-438d-920e-1f984949944c","Type":"ContainerStarted","Data":"27500bcc72d151a572ed87e5e290a7f578c943044490d5b92431fe2a66525be4"} Jan 27 18:39:24 crc kubenswrapper[4907]: I0127 18:39:24.613358 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" podStartSLOduration=2.202721019 podStartE2EDuration="2.613338921s" podCreationTimestamp="2026-01-27 18:39:22 +0000 UTC" firstStartedPulling="2026-01-27 18:39:23.557203694 +0000 UTC m=+2018.686486306" lastFinishedPulling="2026-01-27 18:39:23.967821596 +0000 UTC m=+2019.097104208" observedRunningTime="2026-01-27 18:39:24.608205014 +0000 UTC m=+2019.737487666" watchObservedRunningTime="2026-01-27 18:39:24.613338921 +0000 UTC m=+2019.742621543" Jan 27 18:39:26 crc kubenswrapper[4907]: I0127 18:39:26.521250 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:39:26 crc kubenswrapper[4907]: I0127 18:39:26.521761 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:39:47 crc kubenswrapper[4907]: I0127 18:39:47.733900 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fddws"] Jan 27 18:39:47 crc kubenswrapper[4907]: I0127 18:39:47.738405 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:47 crc kubenswrapper[4907]: I0127 18:39:47.776337 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fddws"] Jan 27 18:39:47 crc kubenswrapper[4907]: I0127 18:39:47.874461 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-catalog-content\") pod \"community-operators-fddws\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:47 crc kubenswrapper[4907]: I0127 18:39:47.874511 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzx57\" (UniqueName: \"kubernetes.io/projected/ec66903e-4bd3-45bc-915d-4c46b7f50550-kube-api-access-mzx57\") pod \"community-operators-fddws\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:47 crc 
kubenswrapper[4907]: I0127 18:39:47.874534 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-utilities\") pod \"community-operators-fddws\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:47 crc kubenswrapper[4907]: I0127 18:39:47.976760 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-catalog-content\") pod \"community-operators-fddws\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:47 crc kubenswrapper[4907]: I0127 18:39:47.976812 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzx57\" (UniqueName: \"kubernetes.io/projected/ec66903e-4bd3-45bc-915d-4c46b7f50550-kube-api-access-mzx57\") pod \"community-operators-fddws\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:47 crc kubenswrapper[4907]: I0127 18:39:47.976834 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-utilities\") pod \"community-operators-fddws\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:47 crc kubenswrapper[4907]: I0127 18:39:47.977257 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-catalog-content\") pod \"community-operators-fddws\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:47 crc 
kubenswrapper[4907]: I0127 18:39:47.977330 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-utilities\") pod \"community-operators-fddws\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:47 crc kubenswrapper[4907]: I0127 18:39:47.997297 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzx57\" (UniqueName: \"kubernetes.io/projected/ec66903e-4bd3-45bc-915d-4c46b7f50550-kube-api-access-mzx57\") pod \"community-operators-fddws\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:48 crc kubenswrapper[4907]: I0127 18:39:48.062483 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:48 crc kubenswrapper[4907]: I0127 18:39:48.559397 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fddws"] Jan 27 18:39:48 crc kubenswrapper[4907]: I0127 18:39:48.882154 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fddws" event={"ID":"ec66903e-4bd3-45bc-915d-4c46b7f50550","Type":"ContainerStarted","Data":"5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274"} Jan 27 18:39:48 crc kubenswrapper[4907]: I0127 18:39:48.882568 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fddws" event={"ID":"ec66903e-4bd3-45bc-915d-4c46b7f50550","Type":"ContainerStarted","Data":"d870dde3209bdd6b610732d708524f925a35bfaed65822d0ede635108e245da8"} Jan 27 18:39:49 crc kubenswrapper[4907]: I0127 18:39:49.895415 4907 generic.go:334] "Generic (PLEG): container finished" podID="ec66903e-4bd3-45bc-915d-4c46b7f50550" 
containerID="5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274" exitCode=0 Jan 27 18:39:49 crc kubenswrapper[4907]: I0127 18:39:49.895664 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fddws" event={"ID":"ec66903e-4bd3-45bc-915d-4c46b7f50550","Type":"ContainerDied","Data":"5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274"} Jan 27 18:39:50 crc kubenswrapper[4907]: I0127 18:39:50.911513 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fddws" event={"ID":"ec66903e-4bd3-45bc-915d-4c46b7f50550","Type":"ContainerStarted","Data":"f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22"} Jan 27 18:39:52 crc kubenswrapper[4907]: I0127 18:39:52.934928 4907 generic.go:334] "Generic (PLEG): container finished" podID="ec66903e-4bd3-45bc-915d-4c46b7f50550" containerID="f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22" exitCode=0 Jan 27 18:39:52 crc kubenswrapper[4907]: I0127 18:39:52.935047 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fddws" event={"ID":"ec66903e-4bd3-45bc-915d-4c46b7f50550","Type":"ContainerDied","Data":"f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22"} Jan 27 18:39:53 crc kubenswrapper[4907]: I0127 18:39:53.961509 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fddws" event={"ID":"ec66903e-4bd3-45bc-915d-4c46b7f50550","Type":"ContainerStarted","Data":"aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2"} Jan 27 18:39:53 crc kubenswrapper[4907]: I0127 18:39:53.992983 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fddws" podStartSLOduration=3.559044312 podStartE2EDuration="6.992961549s" podCreationTimestamp="2026-01-27 18:39:47 +0000 UTC" firstStartedPulling="2026-01-27 18:39:49.898742506 
+0000 UTC m=+2045.028025118" lastFinishedPulling="2026-01-27 18:39:53.332659743 +0000 UTC m=+2048.461942355" observedRunningTime="2026-01-27 18:39:53.982245794 +0000 UTC m=+2049.111528406" watchObservedRunningTime="2026-01-27 18:39:53.992961549 +0000 UTC m=+2049.122244171" Jan 27 18:39:56 crc kubenswrapper[4907]: I0127 18:39:56.051483 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-lb6rn"] Jan 27 18:39:56 crc kubenswrapper[4907]: I0127 18:39:56.063633 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-nlfm6"] Jan 27 18:39:56 crc kubenswrapper[4907]: I0127 18:39:56.073328 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-lb6rn"] Jan 27 18:39:56 crc kubenswrapper[4907]: I0127 18:39:56.082537 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-nlfm6"] Jan 27 18:39:56 crc kubenswrapper[4907]: I0127 18:39:56.521121 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:39:56 crc kubenswrapper[4907]: I0127 18:39:56.521189 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.039712 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-r6sfn"] Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.050096 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-r6sfn"] Jan 27 
18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.062230 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-b6e2-account-create-update-fr784"] Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.073380 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-4610-account-create-update-8lfzv"] Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.085853 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-4610-account-create-update-8lfzv"] Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.097097 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-b6e2-account-create-update-fr784"] Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.109013 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-3c7d-account-create-update-f8kts"] Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.122943 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-3c7d-account-create-update-f8kts"] Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.768219 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1567baee-fe0b-481f-9aca-c424237d77fd" path="/var/lib/kubelet/pods/1567baee-fe0b-481f-9aca-c424237d77fd/volumes" Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.771212 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22bda35b-bb7e-40c5-a263-56fdb4a28784" path="/var/lib/kubelet/pods/22bda35b-bb7e-40c5-a263-56fdb4a28784/volumes" Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.772243 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="743ace74-8ac2-43c7-807c-47379f8c50f4" path="/var/lib/kubelet/pods/743ace74-8ac2-43c7-807c-47379f8c50f4/volumes" Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.773411 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374" 
path="/var/lib/kubelet/pods/94d94b6e-5cd1-445c-8d8a-ec9ab1bfd374/volumes" Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.775263 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fd63a47-2bbf-455b-8732-8d489507a2a0" path="/var/lib/kubelet/pods/9fd63a47-2bbf-455b-8732-8d489507a2a0/volumes" Jan 27 18:39:57 crc kubenswrapper[4907]: I0127 18:39:57.776170 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db79947d-82c1-4b66-8f0d-d34b96ff9a16" path="/var/lib/kubelet/pods/db79947d-82c1-4b66-8f0d-d34b96ff9a16/volumes" Jan 27 18:39:58 crc kubenswrapper[4907]: I0127 18:39:58.063577 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:58 crc kubenswrapper[4907]: I0127 18:39:58.063631 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fddws" Jan 27 18:39:58 crc kubenswrapper[4907]: I0127 18:39:58.737216 4907 scope.go:117] "RemoveContainer" containerID="5dcd6a3a423875bc06c0dd0f5d51a2b87f68629b62084c20315ec4e27b26da69" Jan 27 18:39:58 crc kubenswrapper[4907]: I0127 18:39:58.770770 4907 scope.go:117] "RemoveContainer" containerID="34a12c9dc8f38270982510114a24e7ea3a049e13c05d33bd9ae10ee514d5899f" Jan 27 18:39:58 crc kubenswrapper[4907]: I0127 18:39:58.842787 4907 scope.go:117] "RemoveContainer" containerID="d03e471c14044aaf78991f516c4ab946c86f770f14b401efd31a543a61a45271" Jan 27 18:39:58 crc kubenswrapper[4907]: I0127 18:39:58.902665 4907 scope.go:117] "RemoveContainer" containerID="2bccdaf75b95d0168a686ecc348808d6673dada9c3494bcaf8bc20faf0ab6f1c" Jan 27 18:39:58 crc kubenswrapper[4907]: I0127 18:39:58.981023 4907 scope.go:117] "RemoveContainer" containerID="0aa13b29a06fede5edeefa1bbecf4c945c7dad2111ce30f628e25763e56679c4" Jan 27 18:39:59 crc kubenswrapper[4907]: I0127 18:39:59.086007 4907 scope.go:117] "RemoveContainer" 
containerID="9ed99e608fb935435599432d30ba239373e7950b5f2343e25af6cc133d593e4b" Jan 27 18:39:59 crc kubenswrapper[4907]: I0127 18:39:59.119067 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fddws" podUID="ec66903e-4bd3-45bc-915d-4c46b7f50550" containerName="registry-server" probeResult="failure" output=< Jan 27 18:39:59 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 18:39:59 crc kubenswrapper[4907]: > Jan 27 18:40:08 crc kubenswrapper[4907]: I0127 18:40:08.134488 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fddws" Jan 27 18:40:08 crc kubenswrapper[4907]: I0127 18:40:08.187328 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fddws" Jan 27 18:40:08 crc kubenswrapper[4907]: I0127 18:40:08.374545 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fddws"] Jan 27 18:40:10 crc kubenswrapper[4907]: I0127 18:40:10.138282 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fddws" podUID="ec66903e-4bd3-45bc-915d-4c46b7f50550" containerName="registry-server" containerID="cri-o://aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2" gracePeriod=2 Jan 27 18:40:10 crc kubenswrapper[4907]: I0127 18:40:10.729252 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fddws" Jan 27 18:40:10 crc kubenswrapper[4907]: I0127 18:40:10.862953 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-utilities\") pod \"ec66903e-4bd3-45bc-915d-4c46b7f50550\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " Jan 27 18:40:10 crc kubenswrapper[4907]: I0127 18:40:10.863468 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-catalog-content\") pod \"ec66903e-4bd3-45bc-915d-4c46b7f50550\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " Jan 27 18:40:10 crc kubenswrapper[4907]: I0127 18:40:10.863520 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzx57\" (UniqueName: \"kubernetes.io/projected/ec66903e-4bd3-45bc-915d-4c46b7f50550-kube-api-access-mzx57\") pod \"ec66903e-4bd3-45bc-915d-4c46b7f50550\" (UID: \"ec66903e-4bd3-45bc-915d-4c46b7f50550\") " Jan 27 18:40:10 crc kubenswrapper[4907]: I0127 18:40:10.866201 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-utilities" (OuterVolumeSpecName: "utilities") pod "ec66903e-4bd3-45bc-915d-4c46b7f50550" (UID: "ec66903e-4bd3-45bc-915d-4c46b7f50550"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:40:10 crc kubenswrapper[4907]: I0127 18:40:10.869783 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec66903e-4bd3-45bc-915d-4c46b7f50550-kube-api-access-mzx57" (OuterVolumeSpecName: "kube-api-access-mzx57") pod "ec66903e-4bd3-45bc-915d-4c46b7f50550" (UID: "ec66903e-4bd3-45bc-915d-4c46b7f50550"). InnerVolumeSpecName "kube-api-access-mzx57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:40:10 crc kubenswrapper[4907]: I0127 18:40:10.919062 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec66903e-4bd3-45bc-915d-4c46b7f50550" (UID: "ec66903e-4bd3-45bc-915d-4c46b7f50550"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:40:10 crc kubenswrapper[4907]: I0127 18:40:10.966043 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:40:10 crc kubenswrapper[4907]: I0127 18:40:10.966089 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec66903e-4bd3-45bc-915d-4c46b7f50550-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:40:10 crc kubenswrapper[4907]: I0127 18:40:10.966101 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzx57\" (UniqueName: \"kubernetes.io/projected/ec66903e-4bd3-45bc-915d-4c46b7f50550-kube-api-access-mzx57\") on node \"crc\" DevicePath \"\"" Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.149451 4907 generic.go:334] "Generic (PLEG): container finished" podID="ec66903e-4bd3-45bc-915d-4c46b7f50550" containerID="aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2" exitCode=0 Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.149492 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fddws" event={"ID":"ec66903e-4bd3-45bc-915d-4c46b7f50550","Type":"ContainerDied","Data":"aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2"} Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.149518 4907 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-fddws" event={"ID":"ec66903e-4bd3-45bc-915d-4c46b7f50550","Type":"ContainerDied","Data":"d870dde3209bdd6b610732d708524f925a35bfaed65822d0ede635108e245da8"} Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.149518 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fddws" Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.149533 4907 scope.go:117] "RemoveContainer" containerID="aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2" Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.177300 4907 scope.go:117] "RemoveContainer" containerID="f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22" Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.188576 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fddws"] Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.198753 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fddws"] Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.216580 4907 scope.go:117] "RemoveContainer" containerID="5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274" Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.251314 4907 scope.go:117] "RemoveContainer" containerID="aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2" Jan 27 18:40:11 crc kubenswrapper[4907]: E0127 18:40:11.251833 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2\": container with ID starting with aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2 not found: ID does not exist" containerID="aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2" Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 
18:40:11.251876 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2"} err="failed to get container status \"aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2\": rpc error: code = NotFound desc = could not find container \"aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2\": container with ID starting with aa6d33ba9a6733e3a8a715f90f9a7ee4e2f1ba970cb092efaabf2015401078a2 not found: ID does not exist" Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.251903 4907 scope.go:117] "RemoveContainer" containerID="f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22" Jan 27 18:40:11 crc kubenswrapper[4907]: E0127 18:40:11.252493 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22\": container with ID starting with f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22 not found: ID does not exist" containerID="f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22" Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.252523 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22"} err="failed to get container status \"f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22\": rpc error: code = NotFound desc = could not find container \"f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22\": container with ID starting with f9a1dd4d1ed5016261de86dcc20a76442f957a4991f04d62723f0e469ea5ec22 not found: ID does not exist" Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.252539 4907 scope.go:117] "RemoveContainer" containerID="5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274" Jan 27 18:40:11 crc 
kubenswrapper[4907]: E0127 18:40:11.252942 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274\": container with ID starting with 5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274 not found: ID does not exist" containerID="5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274" Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.253005 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274"} err="failed to get container status \"5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274\": rpc error: code = NotFound desc = could not find container \"5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274\": container with ID starting with 5223f711fd7e16b4f2ae5fba2999bc9c2135bb7c340db7458ae90ae67ad87274 not found: ID does not exist" Jan 27 18:40:11 crc kubenswrapper[4907]: I0127 18:40:11.765926 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec66903e-4bd3-45bc-915d-4c46b7f50550" path="/var/lib/kubelet/pods/ec66903e-4bd3-45bc-915d-4c46b7f50550/volumes" Jan 27 18:40:25 crc kubenswrapper[4907]: I0127 18:40:25.039440 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nfn2m"] Jan 27 18:40:25 crc kubenswrapper[4907]: I0127 18:40:25.048965 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nfn2m"] Jan 27 18:40:25 crc kubenswrapper[4907]: I0127 18:40:25.765983 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0230a81d-2f87-4ad2-a9b5-19cfd369f0b4" path="/var/lib/kubelet/pods/0230a81d-2f87-4ad2-a9b5-19cfd369f0b4/volumes" Jan 27 18:40:26 crc kubenswrapper[4907]: I0127 18:40:26.521693 4907 patch_prober.go:28] interesting 
pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:40:26 crc kubenswrapper[4907]: I0127 18:40:26.521740 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:40:26 crc kubenswrapper[4907]: I0127 18:40:26.521781 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:40:26 crc kubenswrapper[4907]: I0127 18:40:26.522671 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"659950f25293dd44f05a7437433bdb1b277bc9b532caa10ac47c8c5fa872cd61"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:40:26 crc kubenswrapper[4907]: I0127 18:40:26.522717 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://659950f25293dd44f05a7437433bdb1b277bc9b532caa10ac47c8c5fa872cd61" gracePeriod=600 Jan 27 18:40:27 crc kubenswrapper[4907]: I0127 18:40:27.363673 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="659950f25293dd44f05a7437433bdb1b277bc9b532caa10ac47c8c5fa872cd61" exitCode=0 Jan 27 18:40:27 crc kubenswrapper[4907]: I0127 18:40:27.363765 
4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"659950f25293dd44f05a7437433bdb1b277bc9b532caa10ac47c8c5fa872cd61"} Jan 27 18:40:27 crc kubenswrapper[4907]: I0127 18:40:27.364430 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1"} Jan 27 18:40:27 crc kubenswrapper[4907]: I0127 18:40:27.364472 4907 scope.go:117] "RemoveContainer" containerID="b25a65b3b788ffb7511c95bae6fd546df66105752739542453ee882efb354402" Jan 27 18:40:29 crc kubenswrapper[4907]: I0127 18:40:29.030340 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-368c-account-create-update-vclbz"] Jan 27 18:40:29 crc kubenswrapper[4907]: I0127 18:40:29.042836 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-gqf7g"] Jan 27 18:40:29 crc kubenswrapper[4907]: I0127 18:40:29.056175 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-gqf7g"] Jan 27 18:40:29 crc kubenswrapper[4907]: I0127 18:40:29.064050 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-368c-account-create-update-vclbz"] Jan 27 18:40:29 crc kubenswrapper[4907]: I0127 18:40:29.759198 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cabef78-d5b3-4e61-9aa1-0f0529701fa0" path="/var/lib/kubelet/pods/3cabef78-d5b3-4e61-9aa1-0f0529701fa0/volumes" Jan 27 18:40:29 crc kubenswrapper[4907]: I0127 18:40:29.760099 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b430d70c-f51d-4ffd-856f-4035b5d053b7" path="/var/lib/kubelet/pods/b430d70c-f51d-4ffd-856f-4035b5d053b7/volumes" Jan 27 18:40:47 crc kubenswrapper[4907]: I0127 
18:40:47.603386 4907 generic.go:334] "Generic (PLEG): container finished" podID="0aabc401-314e-438d-920e-1f984949944c" containerID="27500bcc72d151a572ed87e5e290a7f578c943044490d5b92431fe2a66525be4" exitCode=0 Jan 27 18:40:47 crc kubenswrapper[4907]: I0127 18:40:47.603637 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" event={"ID":"0aabc401-314e-438d-920e-1f984949944c","Type":"ContainerDied","Data":"27500bcc72d151a572ed87e5e290a7f578c943044490d5b92431fe2a66525be4"} Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.106158 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.265955 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-inventory\") pod \"0aabc401-314e-438d-920e-1f984949944c\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.266420 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrj8l\" (UniqueName: \"kubernetes.io/projected/0aabc401-314e-438d-920e-1f984949944c-kube-api-access-wrj8l\") pod \"0aabc401-314e-438d-920e-1f984949944c\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.266568 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-ssh-key-openstack-edpm-ipam\") pod \"0aabc401-314e-438d-920e-1f984949944c\" (UID: \"0aabc401-314e-438d-920e-1f984949944c\") " Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.274214 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/0aabc401-314e-438d-920e-1f984949944c-kube-api-access-wrj8l" (OuterVolumeSpecName: "kube-api-access-wrj8l") pod "0aabc401-314e-438d-920e-1f984949944c" (UID: "0aabc401-314e-438d-920e-1f984949944c"). InnerVolumeSpecName "kube-api-access-wrj8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.297853 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0aabc401-314e-438d-920e-1f984949944c" (UID: "0aabc401-314e-438d-920e-1f984949944c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.305761 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-inventory" (OuterVolumeSpecName: "inventory") pod "0aabc401-314e-438d-920e-1f984949944c" (UID: "0aabc401-314e-438d-920e-1f984949944c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.369651 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.369692 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrj8l\" (UniqueName: \"kubernetes.io/projected/0aabc401-314e-438d-920e-1f984949944c-kube-api-access-wrj8l\") on node \"crc\" DevicePath \"\"" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.369708 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0aabc401-314e-438d-920e-1f984949944c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.626000 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" event={"ID":"0aabc401-314e-438d-920e-1f984949944c","Type":"ContainerDied","Data":"2f310d8883a1b1ec42ffd46d55f5e6c7fe1d1fb6b0cd2987a556d596484d7627"} Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.626051 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f310d8883a1b1ec42ffd46d55f5e6c7fe1d1fb6b0cd2987a556d596484d7627" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.626419 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.715609 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"] Jan 27 18:40:49 crc kubenswrapper[4907]: E0127 18:40:49.716431 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aabc401-314e-438d-920e-1f984949944c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.716449 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aabc401-314e-438d-920e-1f984949944c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 18:40:49 crc kubenswrapper[4907]: E0127 18:40:49.716682 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec66903e-4bd3-45bc-915d-4c46b7f50550" containerName="registry-server" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.716690 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec66903e-4bd3-45bc-915d-4c46b7f50550" containerName="registry-server" Jan 27 18:40:49 crc kubenswrapper[4907]: E0127 18:40:49.716726 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec66903e-4bd3-45bc-915d-4c46b7f50550" containerName="extract-content" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.716734 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec66903e-4bd3-45bc-915d-4c46b7f50550" containerName="extract-content" Jan 27 18:40:49 crc kubenswrapper[4907]: E0127 18:40:49.716751 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec66903e-4bd3-45bc-915d-4c46b7f50550" containerName="extract-utilities" Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.716759 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec66903e-4bd3-45bc-915d-4c46b7f50550" containerName="extract-utilities" Jan 27 18:40:49 crc 
kubenswrapper[4907]: I0127 18:40:49.716999 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aabc401-314e-438d-920e-1f984949944c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.717034 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec66903e-4bd3-45bc-915d-4c46b7f50550" containerName="registry-server"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.717936 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.719911 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.720264 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.720393 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.720502 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.762088 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"]
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.880611 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.880654 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.880788 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf8fk\" (UniqueName: \"kubernetes.io/projected/907876b3-4761-4612-9c26-3479222c6b72-kube-api-access-xf8fk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.984456 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.984589 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.984735 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf8fk\" (UniqueName: \"kubernetes.io/projected/907876b3-4761-4612-9c26-3479222c6b72-kube-api-access-xf8fk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.992679 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:49 crc kubenswrapper[4907]: I0127 18:40:49.992728 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:50 crc kubenswrapper[4907]: I0127 18:40:50.003605 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf8fk\" (UniqueName: \"kubernetes.io/projected/907876b3-4761-4612-9c26-3479222c6b72-kube-api-access-xf8fk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:50 crc kubenswrapper[4907]: I0127 18:40:50.037341 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:50 crc kubenswrapper[4907]: I0127 18:40:50.062898 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-749bg"]
Jan 27 18:40:50 crc kubenswrapper[4907]: I0127 18:40:50.077326 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-749bg"]
Jan 27 18:40:50 crc kubenswrapper[4907]: I0127 18:40:50.747236 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"]
Jan 27 18:40:51 crc kubenswrapper[4907]: I0127 18:40:51.655544 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2" event={"ID":"907876b3-4761-4612-9c26-3479222c6b72","Type":"ContainerStarted","Data":"dec53ee57bf7a93caf0b60d22e09d441a69ffc6e474f5240205d3033cd48d476"}
Jan 27 18:40:51 crc kubenswrapper[4907]: I0127 18:40:51.656250 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2" event={"ID":"907876b3-4761-4612-9c26-3479222c6b72","Type":"ContainerStarted","Data":"7ba215416677c70483f4396176127eec56d330fe9d96dcabec601cfa3194bcb9"}
Jan 27 18:40:51 crc kubenswrapper[4907]: I0127 18:40:51.688113 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2" podStartSLOduration=2.262152461 podStartE2EDuration="2.688092692s" podCreationTimestamp="2026-01-27 18:40:49 +0000 UTC" firstStartedPulling="2026-01-27 18:40:50.754720505 +0000 UTC m=+2105.884003117" lastFinishedPulling="2026-01-27 18:40:51.180660736 +0000 UTC m=+2106.309943348" observedRunningTime="2026-01-27 18:40:51.679913889 +0000 UTC m=+2106.809196501" watchObservedRunningTime="2026-01-27 18:40:51.688092692 +0000 UTC m=+2106.817375304"
Jan 27 18:40:51 crc kubenswrapper[4907]: I0127 18:40:51.765523 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73c0d1c7-cc84-4792-be06-ce4535d854f1" path="/var/lib/kubelet/pods/73c0d1c7-cc84-4792-be06-ce4535d854f1/volumes"
Jan 27 18:40:57 crc kubenswrapper[4907]: I0127 18:40:57.719855 4907 generic.go:334] "Generic (PLEG): container finished" podID="907876b3-4761-4612-9c26-3479222c6b72" containerID="dec53ee57bf7a93caf0b60d22e09d441a69ffc6e474f5240205d3033cd48d476" exitCode=0
Jan 27 18:40:57 crc kubenswrapper[4907]: I0127 18:40:57.720345 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2" event={"ID":"907876b3-4761-4612-9c26-3479222c6b72","Type":"ContainerDied","Data":"dec53ee57bf7a93caf0b60d22e09d441a69ffc6e474f5240205d3033cd48d476"}
Jan 27 18:40:58 crc kubenswrapper[4907]: I0127 18:40:58.032269 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nr6n7"]
Jan 27 18:40:58 crc kubenswrapper[4907]: I0127 18:40:58.044479 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nr6n7"]
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.276979 4907 scope.go:117] "RemoveContainer" containerID="8b3f4a3edfa3e0499e7c2d7527a3165ef93a166220b850ffa84c2a695cc34f3c"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.426251 4907 scope.go:117] "RemoveContainer" containerID="bdb6c3b8d10b65b8359e6341b59fb087ae09186111397640faa7d69faf5d0b39"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.467292 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.493867 4907 scope.go:117] "RemoveContainer" containerID="1592ddc9ada7089f0a97767680308696c33b12e7b32b0616f4ee01e0285b7838"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.550086 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf8fk\" (UniqueName: \"kubernetes.io/projected/907876b3-4761-4612-9c26-3479222c6b72-kube-api-access-xf8fk\") pod \"907876b3-4761-4612-9c26-3479222c6b72\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") "
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.550169 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-inventory\") pod \"907876b3-4761-4612-9c26-3479222c6b72\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") "
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.550468 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-ssh-key-openstack-edpm-ipam\") pod \"907876b3-4761-4612-9c26-3479222c6b72\" (UID: \"907876b3-4761-4612-9c26-3479222c6b72\") "
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.558427 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/907876b3-4761-4612-9c26-3479222c6b72-kube-api-access-xf8fk" (OuterVolumeSpecName: "kube-api-access-xf8fk") pod "907876b3-4761-4612-9c26-3479222c6b72" (UID: "907876b3-4761-4612-9c26-3479222c6b72"). InnerVolumeSpecName "kube-api-access-xf8fk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.561000 4907 scope.go:117] "RemoveContainer" containerID="bcadd918583503f919d13b0b59f8aab8c38430332c4b149a0d7656fa676f51fb"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.588063 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "907876b3-4761-4612-9c26-3479222c6b72" (UID: "907876b3-4761-4612-9c26-3479222c6b72"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.598145 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-inventory" (OuterVolumeSpecName: "inventory") pod "907876b3-4761-4612-9c26-3479222c6b72" (UID: "907876b3-4761-4612-9c26-3479222c6b72"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.660126 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.660197 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf8fk\" (UniqueName: \"kubernetes.io/projected/907876b3-4761-4612-9c26-3479222c6b72-kube-api-access-xf8fk\") on node \"crc\" DevicePath \"\""
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.660234 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/907876b3-4761-4612-9c26-3479222c6b72-inventory\") on node \"crc\" DevicePath \"\""
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.739168 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2" event={"ID":"907876b3-4761-4612-9c26-3479222c6b72","Type":"ContainerDied","Data":"7ba215416677c70483f4396176127eec56d330fe9d96dcabec601cfa3194bcb9"}
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.739205 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ba215416677c70483f4396176127eec56d330fe9d96dcabec601cfa3194bcb9"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.739250 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.760951 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f9b4dfd-c141-4a97-9656-3f48e7a04309" path="/var/lib/kubelet/pods/8f9b4dfd-c141-4a97-9656-3f48e7a04309/volumes"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.829055 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"]
Jan 27 18:40:59 crc kubenswrapper[4907]: E0127 18:40:59.829720 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="907876b3-4761-4612-9c26-3479222c6b72" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.829743 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="907876b3-4761-4612-9c26-3479222c6b72" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.830049 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="907876b3-4761-4612-9c26-3479222c6b72" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.831054 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.833205 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.833265 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.833651 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.838382 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.844631 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"]
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.864786 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fhxmq\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.865335 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5gt9\" (UniqueName: \"kubernetes.io/projected/daa3c495-5c9e-45cf-b66a-c452e54e9c06-kube-api-access-j5gt9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fhxmq\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.865524 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fhxmq\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.967377 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5gt9\" (UniqueName: \"kubernetes.io/projected/daa3c495-5c9e-45cf-b66a-c452e54e9c06-kube-api-access-j5gt9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fhxmq\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.967452 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fhxmq\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.967541 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fhxmq\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.971950 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fhxmq\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.972264 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fhxmq\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"
Jan 27 18:40:59 crc kubenswrapper[4907]: I0127 18:40:59.983701 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5gt9\" (UniqueName: \"kubernetes.io/projected/daa3c495-5c9e-45cf-b66a-c452e54e9c06-kube-api-access-j5gt9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fhxmq\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"
Jan 27 18:41:00 crc kubenswrapper[4907]: I0127 18:41:00.150621 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"
Jan 27 18:41:00 crc kubenswrapper[4907]: I0127 18:41:00.719027 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq"]
Jan 27 18:41:00 crc kubenswrapper[4907]: I0127 18:41:00.758085 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq" event={"ID":"daa3c495-5c9e-45cf-b66a-c452e54e9c06","Type":"ContainerStarted","Data":"2d04c8b8d7e58c32d8c78d8580c4707b02e5356b1c446bbeabb54375a5b80405"}
Jan 27 18:41:01 crc kubenswrapper[4907]: I0127 18:41:01.776119 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq" event={"ID":"daa3c495-5c9e-45cf-b66a-c452e54e9c06","Type":"ContainerStarted","Data":"43c478894d2881d7995a63c0dfb289493d31a52495aafb982e64cf8e7f4f6ffd"}
Jan 27 18:41:01 crc kubenswrapper[4907]: I0127 18:41:01.806403 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq" podStartSLOduration=2.377653381 podStartE2EDuration="2.806384821s" podCreationTimestamp="2026-01-27 18:40:59 +0000 UTC" firstStartedPulling="2026-01-27 18:41:00.71786733 +0000 UTC m=+2115.847149942" lastFinishedPulling="2026-01-27 18:41:01.14659877 +0000 UTC m=+2116.275881382" observedRunningTime="2026-01-27 18:41:01.792639739 +0000 UTC m=+2116.921922351" watchObservedRunningTime="2026-01-27 18:41:01.806384821 +0000 UTC m=+2116.935667443"
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.670392 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nkhsv"]
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.673830 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.679494 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-utilities\") pod \"redhat-operators-nkhsv\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.679652 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5wd2\" (UniqueName: \"kubernetes.io/projected/6dcc99a8-ae36-4946-9470-e14bf668096c-kube-api-access-q5wd2\") pod \"redhat-operators-nkhsv\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.680131 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-catalog-content\") pod \"redhat-operators-nkhsv\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.685411 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nkhsv"]
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.781534 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-catalog-content\") pod \"redhat-operators-nkhsv\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.781716 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-utilities\") pod \"redhat-operators-nkhsv\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.781776 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5wd2\" (UniqueName: \"kubernetes.io/projected/6dcc99a8-ae36-4946-9470-e14bf668096c-kube-api-access-q5wd2\") pod \"redhat-operators-nkhsv\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.783046 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-catalog-content\") pod \"redhat-operators-nkhsv\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.783064 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-utilities\") pod \"redhat-operators-nkhsv\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:03 crc kubenswrapper[4907]: I0127 18:41:03.805000 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5wd2\" (UniqueName: \"kubernetes.io/projected/6dcc99a8-ae36-4946-9470-e14bf668096c-kube-api-access-q5wd2\") pod \"redhat-operators-nkhsv\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:04 crc kubenswrapper[4907]: I0127 18:41:04.005145 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:04 crc kubenswrapper[4907]: W0127 18:41:04.496633 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dcc99a8_ae36_4946_9470_e14bf668096c.slice/crio-a3f30fc493a12a2cf864ff3a1cd28b9b80e9a7e2fc4c4c0c5e46f2845c6c8e9e WatchSource:0}: Error finding container a3f30fc493a12a2cf864ff3a1cd28b9b80e9a7e2fc4c4c0c5e46f2845c6c8e9e: Status 404 returned error can't find the container with id a3f30fc493a12a2cf864ff3a1cd28b9b80e9a7e2fc4c4c0c5e46f2845c6c8e9e
Jan 27 18:41:04 crc kubenswrapper[4907]: I0127 18:41:04.505513 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nkhsv"]
Jan 27 18:41:04 crc kubenswrapper[4907]: I0127 18:41:04.809484 4907 generic.go:334] "Generic (PLEG): container finished" podID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerID="5707cb8a5d4ddda8749f1b32cd58222a85ab7c1f2d6ddd00ee4930e91f46ecd5" exitCode=0
Jan 27 18:41:04 crc kubenswrapper[4907]: I0127 18:41:04.809709 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkhsv" event={"ID":"6dcc99a8-ae36-4946-9470-e14bf668096c","Type":"ContainerDied","Data":"5707cb8a5d4ddda8749f1b32cd58222a85ab7c1f2d6ddd00ee4930e91f46ecd5"}
Jan 27 18:41:04 crc kubenswrapper[4907]: I0127 18:41:04.809833 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkhsv" event={"ID":"6dcc99a8-ae36-4946-9470-e14bf668096c","Type":"ContainerStarted","Data":"a3f30fc493a12a2cf864ff3a1cd28b9b80e9a7e2fc4c4c0c5e46f2845c6c8e9e"}
Jan 27 18:41:05 crc kubenswrapper[4907]: I0127 18:41:05.857719 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkhsv" event={"ID":"6dcc99a8-ae36-4946-9470-e14bf668096c","Type":"ContainerStarted","Data":"00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691"}
Jan 27 18:41:11 crc kubenswrapper[4907]: I0127 18:41:11.921402 4907 generic.go:334] "Generic (PLEG): container finished" podID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerID="00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691" exitCode=0
Jan 27 18:41:11 crc kubenswrapper[4907]: I0127 18:41:11.921453 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkhsv" event={"ID":"6dcc99a8-ae36-4946-9470-e14bf668096c","Type":"ContainerDied","Data":"00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691"}
Jan 27 18:41:12 crc kubenswrapper[4907]: I0127 18:41:12.933608 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkhsv" event={"ID":"6dcc99a8-ae36-4946-9470-e14bf668096c","Type":"ContainerStarted","Data":"57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c"}
Jan 27 18:41:12 crc kubenswrapper[4907]: I0127 18:41:12.962012 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nkhsv" podStartSLOduration=2.431891185 podStartE2EDuration="9.961990072s" podCreationTimestamp="2026-01-27 18:41:03 +0000 UTC" firstStartedPulling="2026-01-27 18:41:04.811571269 +0000 UTC m=+2119.940853881" lastFinishedPulling="2026-01-27 18:41:12.341670146 +0000 UTC m=+2127.470952768" observedRunningTime="2026-01-27 18:41:12.950219486 +0000 UTC m=+2128.079502098" watchObservedRunningTime="2026-01-27 18:41:12.961990072 +0000 UTC m=+2128.091272684"
Jan 27 18:41:14 crc kubenswrapper[4907]: I0127 18:41:14.005656 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:14 crc kubenswrapper[4907]: I0127 18:41:14.006059 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nkhsv"
Jan 27 18:41:15 crc kubenswrapper[4907]: I0127 18:41:15.070249 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nkhsv" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerName="registry-server" probeResult="failure" output=<
Jan 27 18:41:15 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 18:41:15 crc kubenswrapper[4907]: >
Jan 27 18:41:25 crc kubenswrapper[4907]: I0127 18:41:25.059802 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nkhsv" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerName="registry-server" probeResult="failure" output=<
Jan 27 18:41:25 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 18:41:25 crc kubenswrapper[4907]: >
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.181327 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6pdrz"]
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.187995 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.196346 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pdrz"]
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.286866 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4xjz\" (UniqueName: \"kubernetes.io/projected/8612ce7f-2609-418c-a907-fc9d4a14d650-kube-api-access-v4xjz\") pod \"redhat-marketplace-6pdrz\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.286912 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-catalog-content\") pod \"redhat-marketplace-6pdrz\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.286980 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-utilities\") pod \"redhat-marketplace-6pdrz\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.388977 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4xjz\" (UniqueName: \"kubernetes.io/projected/8612ce7f-2609-418c-a907-fc9d4a14d650-kube-api-access-v4xjz\") pod \"redhat-marketplace-6pdrz\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.389045 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-catalog-content\") pod \"redhat-marketplace-6pdrz\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.389166 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-utilities\") pod \"redhat-marketplace-6pdrz\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.389479 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-catalog-content\") pod \"redhat-marketplace-6pdrz\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.390060 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-utilities\") pod \"redhat-marketplace-6pdrz\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.411078 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4xjz\" (UniqueName: \"kubernetes.io/projected/8612ce7f-2609-418c-a907-fc9d4a14d650-kube-api-access-v4xjz\") pod \"redhat-marketplace-6pdrz\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:31 crc kubenswrapper[4907]: I0127 18:41:31.515571 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:32 crc kubenswrapper[4907]: I0127 18:41:32.078489 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pdrz"]
Jan 27 18:41:32 crc kubenswrapper[4907]: I0127 18:41:32.151411 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pdrz" event={"ID":"8612ce7f-2609-418c-a907-fc9d4a14d650","Type":"ContainerStarted","Data":"58a3f7e832276e4b80160099365d2143d3059adc76e75c5badff9b50d717c5c6"}
Jan 27 18:41:33 crc kubenswrapper[4907]: I0127 18:41:33.163068 4907 generic.go:334] "Generic (PLEG): container finished" podID="8612ce7f-2609-418c-a907-fc9d4a14d650" containerID="cb1786725f2c5d4166096d0ad96dea400f2b9ae9d0f3b662eb220dbdfee87860" exitCode=0
Jan 27 18:41:33 crc kubenswrapper[4907]: I0127 18:41:33.163146 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pdrz" event={"ID":"8612ce7f-2609-418c-a907-fc9d4a14d650","Type":"ContainerDied","Data":"cb1786725f2c5d4166096d0ad96dea400f2b9ae9d0f3b662eb220dbdfee87860"}
Jan 27 18:41:34 crc kubenswrapper[4907]: I0127 18:41:34.210420 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pdrz" event={"ID":"8612ce7f-2609-418c-a907-fc9d4a14d650","Type":"ContainerStarted","Data":"9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed"}
Jan 27 18:41:35 crc kubenswrapper[4907]: I0127 18:41:35.044302 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-q8zd6"]
Jan 27 18:41:35 crc kubenswrapper[4907]: I0127 18:41:35.056197 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nkhsv" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerName="registry-server" probeResult="failure" output=<
Jan 27 18:41:35 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 18:41:35 crc kubenswrapper[4907]: >
Jan 27 18:41:35 crc kubenswrapper[4907]: I0127 18:41:35.060415 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-q8zd6"]
Jan 27 18:41:35 crc kubenswrapper[4907]: I0127 18:41:35.221543 4907 generic.go:334] "Generic (PLEG): container finished" podID="8612ce7f-2609-418c-a907-fc9d4a14d650" containerID="9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed" exitCode=0
Jan 27 18:41:35 crc kubenswrapper[4907]: I0127 18:41:35.221595 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pdrz" event={"ID":"8612ce7f-2609-418c-a907-fc9d4a14d650","Type":"ContainerDied","Data":"9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed"}
Jan 27 18:41:35 crc kubenswrapper[4907]: I0127 18:41:35.760211 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52256d78-f327-4af2-9452-0483ad62dea0" path="/var/lib/kubelet/pods/52256d78-f327-4af2-9452-0483ad62dea0/volumes"
Jan 27 18:41:37 crc kubenswrapper[4907]: I0127 18:41:37.245134 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pdrz" event={"ID":"8612ce7f-2609-418c-a907-fc9d4a14d650","Type":"ContainerStarted","Data":"bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441"}
Jan 27 18:41:37 crc kubenswrapper[4907]: I0127 18:41:37.272902 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6pdrz" podStartSLOduration=3.373450988 podStartE2EDuration="6.272881458s" podCreationTimestamp="2026-01-27 18:41:31 +0000 UTC" firstStartedPulling="2026-01-27 18:41:33.165092197 +0000 UTC m=+2148.294374809" lastFinishedPulling="2026-01-27 18:41:36.064522667 +0000 UTC m=+2151.193805279" observedRunningTime="2026-01-27 18:41:37.26279 +0000 UTC m=+2152.392072632" watchObservedRunningTime="2026-01-27 18:41:37.272881458 +0000 UTC m=+2152.402164070"
Jan 27 18:41:37 crc kubenswrapper[4907]: I0127 18:41:37.572184 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-wz7rn" podUID="1ec7dee3-a9ee-4bb8-b444-899c120854a7" containerName="registry-server" probeResult="failure" output=<
Jan 27 18:41:37 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 18:41:37 crc kubenswrapper[4907]: >
Jan 27 18:41:37 crc kubenswrapper[4907]: I0127 18:41:37.574524 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-wz7rn" podUID="1ec7dee3-a9ee-4bb8-b444-899c120854a7" containerName="registry-server" probeResult="failure" output=<
Jan 27 18:41:37 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 18:41:37 crc kubenswrapper[4907]: >
Jan 27 18:41:41 crc kubenswrapper[4907]: I0127 18:41:41.516185 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:41 crc kubenswrapper[4907]: I0127 18:41:41.516804 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:41 crc kubenswrapper[4907]: I0127 18:41:41.577956 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6pdrz"
Jan 27 18:41:42 crc kubenswrapper[4907]: I0127 18:41:42.304015 4907 generic.go:334] "Generic (PLEG): container finished" podID="daa3c495-5c9e-45cf-b66a-c452e54e9c06" containerID="43c478894d2881d7995a63c0dfb289493d31a52495aafb982e64cf8e7f4f6ffd" exitCode=0
Jan 27 18:41:42 crc kubenswrapper[4907]: I0127 18:41:42.304155 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq" event={"ID":"daa3c495-5c9e-45cf-b66a-c452e54e9c06","Type":"ContainerDied","Data":"43c478894d2881d7995a63c0dfb289493d31a52495aafb982e64cf8e7f4f6ffd"}
Jan 27 18:41:42 crc kubenswrapper[4907]: I0127 18:41:42.399015 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6pdrz" Jan 27 18:41:42 crc kubenswrapper[4907]: I0127 18:41:42.456113 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pdrz"] Jan 27 18:41:43 crc kubenswrapper[4907]: I0127 18:41:43.833014 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq" Jan 27 18:41:43 crc kubenswrapper[4907]: I0127 18:41:43.934878 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-ssh-key-openstack-edpm-ipam\") pod \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " Jan 27 18:41:43 crc kubenswrapper[4907]: I0127 18:41:43.935065 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5gt9\" (UniqueName: \"kubernetes.io/projected/daa3c495-5c9e-45cf-b66a-c452e54e9c06-kube-api-access-j5gt9\") pod \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " Jan 27 18:41:43 crc kubenswrapper[4907]: I0127 18:41:43.935084 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-inventory\") pod \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\" (UID: \"daa3c495-5c9e-45cf-b66a-c452e54e9c06\") " Jan 27 18:41:43 crc kubenswrapper[4907]: I0127 18:41:43.941920 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daa3c495-5c9e-45cf-b66a-c452e54e9c06-kube-api-access-j5gt9" (OuterVolumeSpecName: "kube-api-access-j5gt9") pod "daa3c495-5c9e-45cf-b66a-c452e54e9c06" (UID: 
"daa3c495-5c9e-45cf-b66a-c452e54e9c06"). InnerVolumeSpecName "kube-api-access-j5gt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:41:43 crc kubenswrapper[4907]: I0127 18:41:43.969905 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-inventory" (OuterVolumeSpecName: "inventory") pod "daa3c495-5c9e-45cf-b66a-c452e54e9c06" (UID: "daa3c495-5c9e-45cf-b66a-c452e54e9c06"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:41:43 crc kubenswrapper[4907]: I0127 18:41:43.970377 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "daa3c495-5c9e-45cf-b66a-c452e54e9c06" (UID: "daa3c495-5c9e-45cf-b66a-c452e54e9c06"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.038457 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.038817 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5gt9\" (UniqueName: \"kubernetes.io/projected/daa3c495-5c9e-45cf-b66a-c452e54e9c06-kube-api-access-j5gt9\") on node \"crc\" DevicePath \"\"" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.038832 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daa3c495-5c9e-45cf-b66a-c452e54e9c06-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.067255 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-nkhsv" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.123739 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nkhsv" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.335363 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq" event={"ID":"daa3c495-5c9e-45cf-b66a-c452e54e9c06","Type":"ContainerDied","Data":"2d04c8b8d7e58c32d8c78d8580c4707b02e5356b1c446bbeabb54375a5b80405"} Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.335409 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d04c8b8d7e58c32d8c78d8580c4707b02e5356b1c446bbeabb54375a5b80405" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.335573 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fhxmq" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.335653 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6pdrz" podUID="8612ce7f-2609-418c-a907-fc9d4a14d650" containerName="registry-server" containerID="cri-o://bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441" gracePeriod=2 Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.438773 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn"] Jan 27 18:41:44 crc kubenswrapper[4907]: E0127 18:41:44.439305 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daa3c495-5c9e-45cf-b66a-c452e54e9c06" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.439324 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="daa3c495-5c9e-45cf-b66a-c452e54e9c06" 
containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.439631 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="daa3c495-5c9e-45cf-b66a-c452e54e9c06" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.440526 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.442736 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.442836 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.443047 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.444086 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.450727 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn"] Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.554599 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.554727 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.554803 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5z7p\" (UniqueName: \"kubernetes.io/projected/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-kube-api-access-d5z7p\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.656839 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.656987 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5z7p\" (UniqueName: \"kubernetes.io/projected/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-kube-api-access-d5z7p\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.657248 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-ssh-key-openstack-edpm-ipam\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.662728 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.668194 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.679922 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5z7p\" (UniqueName: \"kubernetes.io/projected/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-kube-api-access-d5z7p\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.819026 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:41:44 crc kubenswrapper[4907]: I0127 18:41:44.950342 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pdrz" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.066912 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-utilities\") pod \"8612ce7f-2609-418c-a907-fc9d4a14d650\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.067343 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-catalog-content\") pod \"8612ce7f-2609-418c-a907-fc9d4a14d650\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.067399 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4xjz\" (UniqueName: \"kubernetes.io/projected/8612ce7f-2609-418c-a907-fc9d4a14d650-kube-api-access-v4xjz\") pod \"8612ce7f-2609-418c-a907-fc9d4a14d650\" (UID: \"8612ce7f-2609-418c-a907-fc9d4a14d650\") " Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.068773 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-utilities" (OuterVolumeSpecName: "utilities") pod "8612ce7f-2609-418c-a907-fc9d4a14d650" (UID: "8612ce7f-2609-418c-a907-fc9d4a14d650"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.072932 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8612ce7f-2609-418c-a907-fc9d4a14d650-kube-api-access-v4xjz" (OuterVolumeSpecName: "kube-api-access-v4xjz") pod "8612ce7f-2609-418c-a907-fc9d4a14d650" (UID: "8612ce7f-2609-418c-a907-fc9d4a14d650"). InnerVolumeSpecName "kube-api-access-v4xjz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.093282 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8612ce7f-2609-418c-a907-fc9d4a14d650" (UID: "8612ce7f-2609-418c-a907-fc9d4a14d650"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.173412 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.173471 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4xjz\" (UniqueName: \"kubernetes.io/projected/8612ce7f-2609-418c-a907-fc9d4a14d650-kube-api-access-v4xjz\") on node \"crc\" DevicePath \"\"" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.173482 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8612ce7f-2609-418c-a907-fc9d4a14d650-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.352222 4907 generic.go:334] "Generic (PLEG): container finished" podID="8612ce7f-2609-418c-a907-fc9d4a14d650" containerID="bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441" exitCode=0 Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.352272 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pdrz" event={"ID":"8612ce7f-2609-418c-a907-fc9d4a14d650","Type":"ContainerDied","Data":"bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441"} Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.352326 4907 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-6pdrz" event={"ID":"8612ce7f-2609-418c-a907-fc9d4a14d650","Type":"ContainerDied","Data":"58a3f7e832276e4b80160099365d2143d3059adc76e75c5badff9b50d717c5c6"} Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.352347 4907 scope.go:117] "RemoveContainer" containerID="bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.352348 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pdrz" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.391734 4907 scope.go:117] "RemoveContainer" containerID="9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.424284 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pdrz"] Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.435132 4907 scope.go:117] "RemoveContainer" containerID="cb1786725f2c5d4166096d0ad96dea400f2b9ae9d0f3b662eb220dbdfee87860" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.438201 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pdrz"] Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.449260 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nkhsv"] Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.449525 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nkhsv" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerName="registry-server" containerID="cri-o://57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c" gracePeriod=2 Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.479921 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn"] Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.480094 4907 scope.go:117] "RemoveContainer" containerID="bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441" Jan 27 18:41:45 crc kubenswrapper[4907]: E0127 18:41:45.480652 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441\": container with ID starting with bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441 not found: ID does not exist" containerID="bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.480698 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441"} err="failed to get container status \"bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441\": rpc error: code = NotFound desc = could not find container \"bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441\": container with ID starting with bf803d1987968bbca7e4ba84d48b76d6616b1d9692740eba611106181f5bc441 not found: ID does not exist" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.480728 4907 scope.go:117] "RemoveContainer" containerID="9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed" Jan 27 18:41:45 crc kubenswrapper[4907]: E0127 18:41:45.481016 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed\": container with ID starting with 9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed not found: ID does not exist" containerID="9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed" Jan 27 18:41:45 crc kubenswrapper[4907]: 
I0127 18:41:45.481042 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed"} err="failed to get container status \"9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed\": rpc error: code = NotFound desc = could not find container \"9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed\": container with ID starting with 9ccdbf2a490579106169d563b6901d5000953651f02afa5fdf9c61e41a7e92ed not found: ID does not exist" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.481057 4907 scope.go:117] "RemoveContainer" containerID="cb1786725f2c5d4166096d0ad96dea400f2b9ae9d0f3b662eb220dbdfee87860" Jan 27 18:41:45 crc kubenswrapper[4907]: E0127 18:41:45.481321 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb1786725f2c5d4166096d0ad96dea400f2b9ae9d0f3b662eb220dbdfee87860\": container with ID starting with cb1786725f2c5d4166096d0ad96dea400f2b9ae9d0f3b662eb220dbdfee87860 not found: ID does not exist" containerID="cb1786725f2c5d4166096d0ad96dea400f2b9ae9d0f3b662eb220dbdfee87860" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.481355 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb1786725f2c5d4166096d0ad96dea400f2b9ae9d0f3b662eb220dbdfee87860"} err="failed to get container status \"cb1786725f2c5d4166096d0ad96dea400f2b9ae9d0f3b662eb220dbdfee87860\": rpc error: code = NotFound desc = could not find container \"cb1786725f2c5d4166096d0ad96dea400f2b9ae9d0f3b662eb220dbdfee87860\": container with ID starting with cb1786725f2c5d4166096d0ad96dea400f2b9ae9d0f3b662eb220dbdfee87860 not found: ID does not exist" Jan 27 18:41:45 crc kubenswrapper[4907]: I0127 18:41:45.790211 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8612ce7f-2609-418c-a907-fc9d4a14d650" 
path="/var/lib/kubelet/pods/8612ce7f-2609-418c-a907-fc9d4a14d650/volumes" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.011623 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nkhsv" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.045773 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.209352 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5wd2\" (UniqueName: \"kubernetes.io/projected/6dcc99a8-ae36-4946-9470-e14bf668096c-kube-api-access-q5wd2\") pod \"6dcc99a8-ae36-4946-9470-e14bf668096c\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.209985 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-catalog-content\") pod \"6dcc99a8-ae36-4946-9470-e14bf668096c\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.210267 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-utilities\") pod \"6dcc99a8-ae36-4946-9470-e14bf668096c\" (UID: \"6dcc99a8-ae36-4946-9470-e14bf668096c\") " Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.211700 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-utilities" (OuterVolumeSpecName: "utilities") pod "6dcc99a8-ae36-4946-9470-e14bf668096c" (UID: "6dcc99a8-ae36-4946-9470-e14bf668096c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.214238 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dcc99a8-ae36-4946-9470-e14bf668096c-kube-api-access-q5wd2" (OuterVolumeSpecName: "kube-api-access-q5wd2") pod "6dcc99a8-ae36-4946-9470-e14bf668096c" (UID: "6dcc99a8-ae36-4946-9470-e14bf668096c"). InnerVolumeSpecName "kube-api-access-q5wd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.313106 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.313147 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5wd2\" (UniqueName: \"kubernetes.io/projected/6dcc99a8-ae36-4946-9470-e14bf668096c-kube-api-access-q5wd2\") on node \"crc\" DevicePath \"\"" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.342938 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6dcc99a8-ae36-4946-9470-e14bf668096c" (UID: "6dcc99a8-ae36-4946-9470-e14bf668096c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.367290 4907 generic.go:334] "Generic (PLEG): container finished" podID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerID="57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c" exitCode=0 Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.367335 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nkhsv" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.367353 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkhsv" event={"ID":"6dcc99a8-ae36-4946-9470-e14bf668096c","Type":"ContainerDied","Data":"57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c"} Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.367409 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkhsv" event={"ID":"6dcc99a8-ae36-4946-9470-e14bf668096c","Type":"ContainerDied","Data":"a3f30fc493a12a2cf864ff3a1cd28b9b80e9a7e2fc4c4c0c5e46f2845c6c8e9e"} Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.367433 4907 scope.go:117] "RemoveContainer" containerID="57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.375842 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" event={"ID":"b8f3066f-ed2e-42b5-94ff-e989771dbe8e","Type":"ContainerStarted","Data":"b66e0cb46aa6d3b6a9d5a90700a232ddc920e42446f3179883bfdd842fe9f9a6"} Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.375887 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" event={"ID":"b8f3066f-ed2e-42b5-94ff-e989771dbe8e","Type":"ContainerStarted","Data":"2924331fffdead98570ab789a81c222bbd4941f62da52502196bdf4571e1c0f8"} Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.401872 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" podStartSLOduration=1.8510723740000001 podStartE2EDuration="2.401848476s" podCreationTimestamp="2026-01-27 18:41:44 +0000 UTC" firstStartedPulling="2026-01-27 18:41:45.490518089 +0000 UTC m=+2160.619800691" 
lastFinishedPulling="2026-01-27 18:41:46.041294181 +0000 UTC m=+2161.170576793" observedRunningTime="2026-01-27 18:41:46.392368416 +0000 UTC m=+2161.521651038" watchObservedRunningTime="2026-01-27 18:41:46.401848476 +0000 UTC m=+2161.531131088" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.404490 4907 scope.go:117] "RemoveContainer" containerID="00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.415057 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dcc99a8-ae36-4946-9470-e14bf668096c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.424521 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nkhsv"] Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.435940 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nkhsv"] Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.443711 4907 scope.go:117] "RemoveContainer" containerID="5707cb8a5d4ddda8749f1b32cd58222a85ab7c1f2d6ddd00ee4930e91f46ecd5" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.467317 4907 scope.go:117] "RemoveContainer" containerID="57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c" Jan 27 18:41:46 crc kubenswrapper[4907]: E0127 18:41:46.468128 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c\": container with ID starting with 57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c not found: ID does not exist" containerID="57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.468168 4907 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c"} err="failed to get container status \"57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c\": rpc error: code = NotFound desc = could not find container \"57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c\": container with ID starting with 57b39177b61f7cac49e01b715ec05ece016383c75349eb3febc94892c519b54c not found: ID does not exist" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.468195 4907 scope.go:117] "RemoveContainer" containerID="00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691" Jan 27 18:41:46 crc kubenswrapper[4907]: E0127 18:41:46.468590 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691\": container with ID starting with 00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691 not found: ID does not exist" containerID="00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.468628 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691"} err="failed to get container status \"00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691\": rpc error: code = NotFound desc = could not find container \"00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691\": container with ID starting with 00dab29203dc65414de8955b2e9379ab39864b931fd57e9bf2276dd4ccdd0691 not found: ID does not exist" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.468654 4907 scope.go:117] "RemoveContainer" containerID="5707cb8a5d4ddda8749f1b32cd58222a85ab7c1f2d6ddd00ee4930e91f46ecd5" Jan 27 18:41:46 crc kubenswrapper[4907]: E0127 18:41:46.469157 4907 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5707cb8a5d4ddda8749f1b32cd58222a85ab7c1f2d6ddd00ee4930e91f46ecd5\": container with ID starting with 5707cb8a5d4ddda8749f1b32cd58222a85ab7c1f2d6ddd00ee4930e91f46ecd5 not found: ID does not exist" containerID="5707cb8a5d4ddda8749f1b32cd58222a85ab7c1f2d6ddd00ee4930e91f46ecd5" Jan 27 18:41:46 crc kubenswrapper[4907]: I0127 18:41:46.469198 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5707cb8a5d4ddda8749f1b32cd58222a85ab7c1f2d6ddd00ee4930e91f46ecd5"} err="failed to get container status \"5707cb8a5d4ddda8749f1b32cd58222a85ab7c1f2d6ddd00ee4930e91f46ecd5\": rpc error: code = NotFound desc = could not find container \"5707cb8a5d4ddda8749f1b32cd58222a85ab7c1f2d6ddd00ee4930e91f46ecd5\": container with ID starting with 5707cb8a5d4ddda8749f1b32cd58222a85ab7c1f2d6ddd00ee4930e91f46ecd5 not found: ID does not exist" Jan 27 18:41:47 crc kubenswrapper[4907]: I0127 18:41:47.764020 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" path="/var/lib/kubelet/pods/6dcc99a8-ae36-4946-9470-e14bf668096c/volumes" Jan 27 18:41:59 crc kubenswrapper[4907]: I0127 18:41:59.734095 4907 scope.go:117] "RemoveContainer" containerID="1f6118408a31d5a5e77efd770c6620f1c689b1f1d408e1f2ae98b9f2c6e384d3" Jan 27 18:41:59 crc kubenswrapper[4907]: I0127 18:41:59.781352 4907 scope.go:117] "RemoveContainer" containerID="9edeb33b4a8de205d14550b1bca2dae8e8b09e2147f0ba6205d2d29a2866b38b" Jan 27 18:42:26 crc kubenswrapper[4907]: I0127 18:42:26.521388 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:42:26 crc kubenswrapper[4907]: I0127 18:42:26.522027 4907 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:42:41 crc kubenswrapper[4907]: I0127 18:42:41.040015 4907 generic.go:334] "Generic (PLEG): container finished" podID="b8f3066f-ed2e-42b5-94ff-e989771dbe8e" containerID="b66e0cb46aa6d3b6a9d5a90700a232ddc920e42446f3179883bfdd842fe9f9a6" exitCode=0 Jan 27 18:42:41 crc kubenswrapper[4907]: I0127 18:42:41.040497 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" event={"ID":"b8f3066f-ed2e-42b5-94ff-e989771dbe8e","Type":"ContainerDied","Data":"b66e0cb46aa6d3b6a9d5a90700a232ddc920e42446f3179883bfdd842fe9f9a6"} Jan 27 18:42:42 crc kubenswrapper[4907]: I0127 18:42:42.535698 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:42:42 crc kubenswrapper[4907]: I0127 18:42:42.633920 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-ssh-key-openstack-edpm-ipam\") pod \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " Jan 27 18:42:42 crc kubenswrapper[4907]: I0127 18:42:42.634219 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5z7p\" (UniqueName: \"kubernetes.io/projected/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-kube-api-access-d5z7p\") pod \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " Jan 27 18:42:42 crc kubenswrapper[4907]: I0127 18:42:42.634318 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-inventory\") pod \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\" (UID: \"b8f3066f-ed2e-42b5-94ff-e989771dbe8e\") " Jan 27 18:42:42 crc kubenswrapper[4907]: I0127 18:42:42.639955 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-kube-api-access-d5z7p" (OuterVolumeSpecName: "kube-api-access-d5z7p") pod "b8f3066f-ed2e-42b5-94ff-e989771dbe8e" (UID: "b8f3066f-ed2e-42b5-94ff-e989771dbe8e"). InnerVolumeSpecName "kube-api-access-d5z7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:42 crc kubenswrapper[4907]: I0127 18:42:42.686595 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-inventory" (OuterVolumeSpecName: "inventory") pod "b8f3066f-ed2e-42b5-94ff-e989771dbe8e" (UID: "b8f3066f-ed2e-42b5-94ff-e989771dbe8e"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:42 crc kubenswrapper[4907]: I0127 18:42:42.686700 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b8f3066f-ed2e-42b5-94ff-e989771dbe8e" (UID: "b8f3066f-ed2e-42b5-94ff-e989771dbe8e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:42 crc kubenswrapper[4907]: I0127 18:42:42.737068 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:42 crc kubenswrapper[4907]: I0127 18:42:42.737103 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:42 crc kubenswrapper[4907]: I0127 18:42:42.737116 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5z7p\" (UniqueName: \"kubernetes.io/projected/b8f3066f-ed2e-42b5-94ff-e989771dbe8e-kube-api-access-d5z7p\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.063132 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" event={"ID":"b8f3066f-ed2e-42b5-94ff-e989771dbe8e","Type":"ContainerDied","Data":"2924331fffdead98570ab789a81c222bbd4941f62da52502196bdf4571e1c0f8"} Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.063179 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2924331fffdead98570ab789a81c222bbd4941f62da52502196bdf4571e1c0f8" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 
18:42:43.063239 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.161880 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vlfg7"] Jan 27 18:42:43 crc kubenswrapper[4907]: E0127 18:42:43.162444 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8612ce7f-2609-418c-a907-fc9d4a14d650" containerName="extract-utilities" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.162470 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8612ce7f-2609-418c-a907-fc9d4a14d650" containerName="extract-utilities" Jan 27 18:42:43 crc kubenswrapper[4907]: E0127 18:42:43.162516 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8612ce7f-2609-418c-a907-fc9d4a14d650" containerName="registry-server" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.162526 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8612ce7f-2609-418c-a907-fc9d4a14d650" containerName="registry-server" Jan 27 18:42:43 crc kubenswrapper[4907]: E0127 18:42:43.162876 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerName="extract-content" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.162895 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerName="extract-content" Jan 27 18:42:43 crc kubenswrapper[4907]: E0127 18:42:43.162913 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerName="extract-utilities" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.162919 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerName="extract-utilities" Jan 27 18:42:43 crc kubenswrapper[4907]: E0127 18:42:43.162931 
4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8612ce7f-2609-418c-a907-fc9d4a14d650" containerName="extract-content" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.162937 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8612ce7f-2609-418c-a907-fc9d4a14d650" containerName="extract-content" Jan 27 18:42:43 crc kubenswrapper[4907]: E0127 18:42:43.162948 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerName="registry-server" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.162953 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerName="registry-server" Jan 27 18:42:43 crc kubenswrapper[4907]: E0127 18:42:43.162962 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8f3066f-ed2e-42b5-94ff-e989771dbe8e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.162969 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8f3066f-ed2e-42b5-94ff-e989771dbe8e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.163203 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dcc99a8-ae36-4946-9470-e14bf668096c" containerName="registry-server" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.163219 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8f3066f-ed2e-42b5-94ff-e989771dbe8e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.163235 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="8612ce7f-2609-418c-a907-fc9d4a14d650" containerName="registry-server" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.164075 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.166373 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.166700 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.166810 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.167673 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.185321 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vlfg7"] Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.248125 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp9c8\" (UniqueName: \"kubernetes.io/projected/71334cb5-9354-4f68-91bf-8631e5fa045a-kube-api-access-rp9c8\") pod \"ssh-known-hosts-edpm-deployment-vlfg7\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.248325 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vlfg7\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.248764 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vlfg7\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.351074 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp9c8\" (UniqueName: \"kubernetes.io/projected/71334cb5-9354-4f68-91bf-8631e5fa045a-kube-api-access-rp9c8\") pod \"ssh-known-hosts-edpm-deployment-vlfg7\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.351166 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vlfg7\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.351311 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vlfg7\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.355536 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vlfg7\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:43 crc 
kubenswrapper[4907]: I0127 18:42:43.356922 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vlfg7\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.371871 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp9c8\" (UniqueName: \"kubernetes.io/projected/71334cb5-9354-4f68-91bf-8631e5fa045a-kube-api-access-rp9c8\") pod \"ssh-known-hosts-edpm-deployment-vlfg7\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:43 crc kubenswrapper[4907]: I0127 18:42:43.481771 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:44 crc kubenswrapper[4907]: I0127 18:42:44.011224 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vlfg7"] Jan 27 18:42:44 crc kubenswrapper[4907]: W0127 18:42:44.014833 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71334cb5_9354_4f68_91bf_8631e5fa045a.slice/crio-e57f4f15f4a758ef245cad7b5e1307a84d89939ff3cf5ae84e6fa147446da46e WatchSource:0}: Error finding container e57f4f15f4a758ef245cad7b5e1307a84d89939ff3cf5ae84e6fa147446da46e: Status 404 returned error can't find the container with id e57f4f15f4a758ef245cad7b5e1307a84d89939ff3cf5ae84e6fa147446da46e Jan 27 18:42:44 crc kubenswrapper[4907]: I0127 18:42:44.079000 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" 
event={"ID":"71334cb5-9354-4f68-91bf-8631e5fa045a","Type":"ContainerStarted","Data":"e57f4f15f4a758ef245cad7b5e1307a84d89939ff3cf5ae84e6fa147446da46e"} Jan 27 18:42:46 crc kubenswrapper[4907]: I0127 18:42:46.104333 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" event={"ID":"71334cb5-9354-4f68-91bf-8631e5fa045a","Type":"ContainerStarted","Data":"5284cf9eb0b5c0a7efa32633a348cc8288216b28a3edfc0c4808b4c05c8afb4d"} Jan 27 18:42:46 crc kubenswrapper[4907]: I0127 18:42:46.133920 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" podStartSLOduration=1.5240840009999999 podStartE2EDuration="3.133890963s" podCreationTimestamp="2026-01-27 18:42:43 +0000 UTC" firstStartedPulling="2026-01-27 18:42:44.016871412 +0000 UTC m=+2219.146154024" lastFinishedPulling="2026-01-27 18:42:45.626678344 +0000 UTC m=+2220.755960986" observedRunningTime="2026-01-27 18:42:46.118713461 +0000 UTC m=+2221.247996133" watchObservedRunningTime="2026-01-27 18:42:46.133890963 +0000 UTC m=+2221.263173605" Jan 27 18:42:53 crc kubenswrapper[4907]: I0127 18:42:53.182326 4907 generic.go:334] "Generic (PLEG): container finished" podID="71334cb5-9354-4f68-91bf-8631e5fa045a" containerID="5284cf9eb0b5c0a7efa32633a348cc8288216b28a3edfc0c4808b4c05c8afb4d" exitCode=0 Jan 27 18:42:53 crc kubenswrapper[4907]: I0127 18:42:53.182427 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" event={"ID":"71334cb5-9354-4f68-91bf-8631e5fa045a","Type":"ContainerDied","Data":"5284cf9eb0b5c0a7efa32633a348cc8288216b28a3edfc0c4808b4c05c8afb4d"} Jan 27 18:42:54 crc kubenswrapper[4907]: I0127 18:42:54.682160 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:54 crc kubenswrapper[4907]: I0127 18:42:54.756648 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-inventory-0\") pod \"71334cb5-9354-4f68-91bf-8631e5fa045a\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " Jan 27 18:42:54 crc kubenswrapper[4907]: I0127 18:42:54.756884 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp9c8\" (UniqueName: \"kubernetes.io/projected/71334cb5-9354-4f68-91bf-8631e5fa045a-kube-api-access-rp9c8\") pod \"71334cb5-9354-4f68-91bf-8631e5fa045a\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " Jan 27 18:42:54 crc kubenswrapper[4907]: I0127 18:42:54.757068 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-ssh-key-openstack-edpm-ipam\") pod \"71334cb5-9354-4f68-91bf-8631e5fa045a\" (UID: \"71334cb5-9354-4f68-91bf-8631e5fa045a\") " Jan 27 18:42:54 crc kubenswrapper[4907]: I0127 18:42:54.762769 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71334cb5-9354-4f68-91bf-8631e5fa045a-kube-api-access-rp9c8" (OuterVolumeSpecName: "kube-api-access-rp9c8") pod "71334cb5-9354-4f68-91bf-8631e5fa045a" (UID: "71334cb5-9354-4f68-91bf-8631e5fa045a"). InnerVolumeSpecName "kube-api-access-rp9c8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:54 crc kubenswrapper[4907]: I0127 18:42:54.789108 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "71334cb5-9354-4f68-91bf-8631e5fa045a" (UID: "71334cb5-9354-4f68-91bf-8631e5fa045a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:54 crc kubenswrapper[4907]: I0127 18:42:54.790353 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "71334cb5-9354-4f68-91bf-8631e5fa045a" (UID: "71334cb5-9354-4f68-91bf-8631e5fa045a"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:54 crc kubenswrapper[4907]: I0127 18:42:54.860271 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp9c8\" (UniqueName: \"kubernetes.io/projected/71334cb5-9354-4f68-91bf-8631e5fa045a-kube-api-access-rp9c8\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:54 crc kubenswrapper[4907]: I0127 18:42:54.860319 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:54 crc kubenswrapper[4907]: I0127 18:42:54.860335 4907 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71334cb5-9354-4f68-91bf-8631e5fa045a-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.208533 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" 
event={"ID":"71334cb5-9354-4f68-91bf-8631e5fa045a","Type":"ContainerDied","Data":"e57f4f15f4a758ef245cad7b5e1307a84d89939ff3cf5ae84e6fa147446da46e"} Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.208600 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e57f4f15f4a758ef245cad7b5e1307a84d89939ff3cf5ae84e6fa147446da46e" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.208662 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vlfg7" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.279256 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44"] Jan 27 18:42:55 crc kubenswrapper[4907]: E0127 18:42:55.279792 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71334cb5-9354-4f68-91bf-8631e5fa045a" containerName="ssh-known-hosts-edpm-deployment" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.279815 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="71334cb5-9354-4f68-91bf-8631e5fa045a" containerName="ssh-known-hosts-edpm-deployment" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.280053 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="71334cb5-9354-4f68-91bf-8631e5fa045a" containerName="ssh-known-hosts-edpm-deployment" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.283026 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.286380 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.286884 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.287107 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.287218 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.313095 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44"] Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.373285 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-56b44\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.373361 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-56b44\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.373525 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5phf9\" (UniqueName: \"kubernetes.io/projected/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-kube-api-access-5phf9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-56b44\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.477795 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-56b44\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.477926 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-56b44\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.478090 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5phf9\" (UniqueName: \"kubernetes.io/projected/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-kube-api-access-5phf9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-56b44\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.482330 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-56b44\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.482417 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-56b44\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.505446 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5phf9\" (UniqueName: \"kubernetes.io/projected/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-kube-api-access-5phf9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-56b44\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:42:55 crc kubenswrapper[4907]: I0127 18:42:55.607305 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:42:56 crc kubenswrapper[4907]: I0127 18:42:56.204934 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44"] Jan 27 18:42:56 crc kubenswrapper[4907]: W0127 18:42:56.207039 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff08f4dc_f4e3_4e83_b922_32b6296fbee0.slice/crio-5c456830e6bc9f0c261ee9b6600de7fe1c50d3dc4f5e74b85fe8e0a31ec5e84f WatchSource:0}: Error finding container 5c456830e6bc9f0c261ee9b6600de7fe1c50d3dc4f5e74b85fe8e0a31ec5e84f: Status 404 returned error can't find the container with id 5c456830e6bc9f0c261ee9b6600de7fe1c50d3dc4f5e74b85fe8e0a31ec5e84f Jan 27 18:42:56 crc kubenswrapper[4907]: I0127 18:42:56.222070 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" event={"ID":"ff08f4dc-f4e3-4e83-b922-32b6296fbee0","Type":"ContainerStarted","Data":"5c456830e6bc9f0c261ee9b6600de7fe1c50d3dc4f5e74b85fe8e0a31ec5e84f"} Jan 27 18:42:56 crc kubenswrapper[4907]: I0127 18:42:56.521274 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:42:56 crc kubenswrapper[4907]: I0127 18:42:56.521592 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4907]: I0127 18:42:57.238168 4907 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" event={"ID":"ff08f4dc-f4e3-4e83-b922-32b6296fbee0","Type":"ContainerStarted","Data":"5631c5ec197a33623d296e962c18c51f44af4a1f32c15531a4cce6284004356a"} Jan 27 18:42:57 crc kubenswrapper[4907]: I0127 18:42:57.270950 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" podStartSLOduration=1.847879176 podStartE2EDuration="2.270927754s" podCreationTimestamp="2026-01-27 18:42:55 +0000 UTC" firstStartedPulling="2026-01-27 18:42:56.212339886 +0000 UTC m=+2231.341622508" lastFinishedPulling="2026-01-27 18:42:56.635388474 +0000 UTC m=+2231.764671086" observedRunningTime="2026-01-27 18:42:57.257614764 +0000 UTC m=+2232.386897396" watchObservedRunningTime="2026-01-27 18:42:57.270927754 +0000 UTC m=+2232.400210376" Jan 27 18:43:06 crc kubenswrapper[4907]: I0127 18:43:06.346533 4907 generic.go:334] "Generic (PLEG): container finished" podID="ff08f4dc-f4e3-4e83-b922-32b6296fbee0" containerID="5631c5ec197a33623d296e962c18c51f44af4a1f32c15531a4cce6284004356a" exitCode=0 Jan 27 18:43:06 crc kubenswrapper[4907]: I0127 18:43:06.346642 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" event={"ID":"ff08f4dc-f4e3-4e83-b922-32b6296fbee0","Type":"ContainerDied","Data":"5631c5ec197a33623d296e962c18c51f44af4a1f32c15531a4cce6284004356a"} Jan 27 18:43:07 crc kubenswrapper[4907]: I0127 18:43:07.892978 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.019156 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-inventory\") pod \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.019205 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-ssh-key-openstack-edpm-ipam\") pod \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.019249 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5phf9\" (UniqueName: \"kubernetes.io/projected/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-kube-api-access-5phf9\") pod \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\" (UID: \"ff08f4dc-f4e3-4e83-b922-32b6296fbee0\") " Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.028938 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-kube-api-access-5phf9" (OuterVolumeSpecName: "kube-api-access-5phf9") pod "ff08f4dc-f4e3-4e83-b922-32b6296fbee0" (UID: "ff08f4dc-f4e3-4e83-b922-32b6296fbee0"). InnerVolumeSpecName "kube-api-access-5phf9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.074592 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-6xh4v"] Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.074682 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ff08f4dc-f4e3-4e83-b922-32b6296fbee0" (UID: "ff08f4dc-f4e3-4e83-b922-32b6296fbee0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.085736 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-inventory" (OuterVolumeSpecName: "inventory") pod "ff08f4dc-f4e3-4e83-b922-32b6296fbee0" (UID: "ff08f4dc-f4e3-4e83-b922-32b6296fbee0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.089972 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-6xh4v"] Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.122335 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.122369 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.122380 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5phf9\" (UniqueName: \"kubernetes.io/projected/ff08f4dc-f4e3-4e83-b922-32b6296fbee0-kube-api-access-5phf9\") on node \"crc\" DevicePath \"\"" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.372096 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" event={"ID":"ff08f4dc-f4e3-4e83-b922-32b6296fbee0","Type":"ContainerDied","Data":"5c456830e6bc9f0c261ee9b6600de7fe1c50d3dc4f5e74b85fe8e0a31ec5e84f"} Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.372438 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c456830e6bc9f0c261ee9b6600de7fe1c50d3dc4f5e74b85fe8e0a31ec5e84f" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.372136 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-56b44" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.471215 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb"] Jan 27 18:43:08 crc kubenswrapper[4907]: E0127 18:43:08.472340 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff08f4dc-f4e3-4e83-b922-32b6296fbee0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.472362 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff08f4dc-f4e3-4e83-b922-32b6296fbee0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.472967 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff08f4dc-f4e3-4e83-b922-32b6296fbee0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.475336 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.478936 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.480585 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.480789 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.480935 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.504955 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb"] Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.635950 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.636211 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khpc5\" (UniqueName: \"kubernetes.io/projected/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-kube-api-access-khpc5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.636508 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.738677 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.738779 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.738852 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khpc5\" (UniqueName: \"kubernetes.io/projected/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-kube-api-access-khpc5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.743695 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.744094 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.757238 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khpc5\" (UniqueName: \"kubernetes.io/projected/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-kube-api-access-khpc5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:08 crc kubenswrapper[4907]: I0127 18:43:08.802080 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:09 crc kubenswrapper[4907]: I0127 18:43:09.332266 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb"] Jan 27 18:43:09 crc kubenswrapper[4907]: I0127 18:43:09.381842 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" event={"ID":"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a","Type":"ContainerStarted","Data":"c5ec331e7b04bc5a1509015ba342451b8d6a1e601b289cec9ac02e3402049da7"} Jan 27 18:43:09 crc kubenswrapper[4907]: I0127 18:43:09.762486 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a67fd41b-79b0-4ab4-86b6-816389597620" path="/var/lib/kubelet/pods/a67fd41b-79b0-4ab4-86b6-816389597620/volumes" Jan 27 18:43:09 crc kubenswrapper[4907]: I0127 18:43:09.992721 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vjvlr"] Jan 27 18:43:09 crc kubenswrapper[4907]: I0127 18:43:09.996061 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.027028 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vjvlr"] Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.076817 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n6st\" (UniqueName: \"kubernetes.io/projected/a7401251-23ae-4ff6-8e3f-b40f4d072626-kube-api-access-9n6st\") pod \"certified-operators-vjvlr\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.076890 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-utilities\") pod \"certified-operators-vjvlr\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.076926 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-catalog-content\") pod \"certified-operators-vjvlr\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.179377 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n6st\" (UniqueName: \"kubernetes.io/projected/a7401251-23ae-4ff6-8e3f-b40f4d072626-kube-api-access-9n6st\") pod \"certified-operators-vjvlr\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.179497 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-utilities\") pod \"certified-operators-vjvlr\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.179585 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-catalog-content\") pod \"certified-operators-vjvlr\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.180147 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-catalog-content\") pod \"certified-operators-vjvlr\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.180129 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-utilities\") pod \"certified-operators-vjvlr\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.197973 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n6st\" (UniqueName: \"kubernetes.io/projected/a7401251-23ae-4ff6-8e3f-b40f4d072626-kube-api-access-9n6st\") pod \"certified-operators-vjvlr\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.322008 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.401256 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" event={"ID":"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a","Type":"ContainerStarted","Data":"237af818d05778057c6ca186e5d04ce7b64817a943df2a45c5c24a67e6977324"} Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.425768 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" podStartSLOduration=1.9737207890000001 podStartE2EDuration="2.425744194s" podCreationTimestamp="2026-01-27 18:43:08 +0000 UTC" firstStartedPulling="2026-01-27 18:43:09.338494128 +0000 UTC m=+2244.467776730" lastFinishedPulling="2026-01-27 18:43:09.790517523 +0000 UTC m=+2244.919800135" observedRunningTime="2026-01-27 18:43:10.42035898 +0000 UTC m=+2245.549641592" watchObservedRunningTime="2026-01-27 18:43:10.425744194 +0000 UTC m=+2245.555026806" Jan 27 18:43:10 crc kubenswrapper[4907]: I0127 18:43:10.905044 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vjvlr"] Jan 27 18:43:11 crc kubenswrapper[4907]: I0127 18:43:11.413519 4907 generic.go:334] "Generic (PLEG): container finished" podID="a7401251-23ae-4ff6-8e3f-b40f4d072626" containerID="96457a94494bacc5a3958f9c64d2be5cb9dcf29a85e9589bd17d32559d93311f" exitCode=0 Jan 27 18:43:11 crc kubenswrapper[4907]: I0127 18:43:11.413617 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjvlr" event={"ID":"a7401251-23ae-4ff6-8e3f-b40f4d072626","Type":"ContainerDied","Data":"96457a94494bacc5a3958f9c64d2be5cb9dcf29a85e9589bd17d32559d93311f"} Jan 27 18:43:11 crc kubenswrapper[4907]: I0127 18:43:11.413867 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjvlr" 
event={"ID":"a7401251-23ae-4ff6-8e3f-b40f4d072626","Type":"ContainerStarted","Data":"557910ef70730e660beb16aea62a520fa40106c7bfd5cda588ce4a03758df88d"} Jan 27 18:43:13 crc kubenswrapper[4907]: I0127 18:43:13.438874 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjvlr" event={"ID":"a7401251-23ae-4ff6-8e3f-b40f4d072626","Type":"ContainerStarted","Data":"62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475"} Jan 27 18:43:14 crc kubenswrapper[4907]: I0127 18:43:14.455961 4907 generic.go:334] "Generic (PLEG): container finished" podID="a7401251-23ae-4ff6-8e3f-b40f4d072626" containerID="62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475" exitCode=0 Jan 27 18:43:14 crc kubenswrapper[4907]: I0127 18:43:14.456041 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjvlr" event={"ID":"a7401251-23ae-4ff6-8e3f-b40f4d072626","Type":"ContainerDied","Data":"62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475"} Jan 27 18:43:15 crc kubenswrapper[4907]: I0127 18:43:15.483879 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjvlr" event={"ID":"a7401251-23ae-4ff6-8e3f-b40f4d072626","Type":"ContainerStarted","Data":"1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb"} Jan 27 18:43:15 crc kubenswrapper[4907]: I0127 18:43:15.524366 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vjvlr" podStartSLOduration=3.062846815 podStartE2EDuration="6.52434101s" podCreationTimestamp="2026-01-27 18:43:09 +0000 UTC" firstStartedPulling="2026-01-27 18:43:11.415419456 +0000 UTC m=+2246.544702068" lastFinishedPulling="2026-01-27 18:43:14.876913611 +0000 UTC m=+2250.006196263" observedRunningTime="2026-01-27 18:43:15.505926844 +0000 UTC m=+2250.635209456" watchObservedRunningTime="2026-01-27 18:43:15.52434101 +0000 UTC 
m=+2250.653623632" Jan 27 18:43:20 crc kubenswrapper[4907]: I0127 18:43:20.322423 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:20 crc kubenswrapper[4907]: I0127 18:43:20.323006 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:20 crc kubenswrapper[4907]: I0127 18:43:20.408020 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:20 crc kubenswrapper[4907]: I0127 18:43:20.543203 4907 generic.go:334] "Generic (PLEG): container finished" podID="9dcf4e25-6609-484b-98b6-a7c96c0a2c4a" containerID="237af818d05778057c6ca186e5d04ce7b64817a943df2a45c5c24a67e6977324" exitCode=0 Jan 27 18:43:20 crc kubenswrapper[4907]: I0127 18:43:20.543286 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" event={"ID":"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a","Type":"ContainerDied","Data":"237af818d05778057c6ca186e5d04ce7b64817a943df2a45c5c24a67e6977324"} Jan 27 18:43:20 crc kubenswrapper[4907]: I0127 18:43:20.615286 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:20 crc kubenswrapper[4907]: I0127 18:43:20.673162 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vjvlr"] Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.039811 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.123771 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khpc5\" (UniqueName: \"kubernetes.io/projected/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-kube-api-access-khpc5\") pod \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.123825 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-inventory\") pod \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.124039 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-ssh-key-openstack-edpm-ipam\") pod \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\" (UID: \"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a\") " Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.130327 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-kube-api-access-khpc5" (OuterVolumeSpecName: "kube-api-access-khpc5") pod "9dcf4e25-6609-484b-98b6-a7c96c0a2c4a" (UID: "9dcf4e25-6609-484b-98b6-a7c96c0a2c4a"). InnerVolumeSpecName "kube-api-access-khpc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.157793 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9dcf4e25-6609-484b-98b6-a7c96c0a2c4a" (UID: "9dcf4e25-6609-484b-98b6-a7c96c0a2c4a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.163846 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-inventory" (OuterVolumeSpecName: "inventory") pod "9dcf4e25-6609-484b-98b6-a7c96c0a2c4a" (UID: "9dcf4e25-6609-484b-98b6-a7c96c0a2c4a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.227170 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.227202 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khpc5\" (UniqueName: \"kubernetes.io/projected/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-kube-api-access-khpc5\") on node \"crc\" DevicePath \"\"" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.227211 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dcf4e25-6609-484b-98b6-a7c96c0a2c4a-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.568910 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" 
event={"ID":"9dcf4e25-6609-484b-98b6-a7c96c0a2c4a","Type":"ContainerDied","Data":"c5ec331e7b04bc5a1509015ba342451b8d6a1e601b289cec9ac02e3402049da7"} Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.569311 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5ec331e7b04bc5a1509015ba342451b8d6a1e601b289cec9ac02e3402049da7" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.569091 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vjvlr" podUID="a7401251-23ae-4ff6-8e3f-b40f4d072626" containerName="registry-server" containerID="cri-o://1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb" gracePeriod=2 Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.568953 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.695634 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc"] Jan 27 18:43:22 crc kubenswrapper[4907]: E0127 18:43:22.696435 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dcf4e25-6609-484b-98b6-a7c96c0a2c4a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.696459 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dcf4e25-6609-484b-98b6-a7c96c0a2c4a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.696779 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dcf4e25-6609-484b-98b6-a7c96c0a2c4a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.697787 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.701852 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.702059 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.702088 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.702122 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.702174 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.702227 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.702354 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.702451 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.702543 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.744181 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc"] Jan 27 18:43:22 crc 
kubenswrapper[4907]: I0127 18:43:22.840360 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.840424 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.840454 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.840731 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksxrd\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-kube-api-access-ksxrd\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: 
I0127 18:43:22.840842 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.840872 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.841020 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.841083 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.841125 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.841196 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.841232 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.841254 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.841318 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.841364 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.841459 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.841505 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.943701 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.943787 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.943848 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.943884 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 
18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.943912 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.943935 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.943953 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.944292 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksxrd\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-kube-api-access-ksxrd\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.944374 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.944400 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.944492 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.944578 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.944623 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.944691 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.944720 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.944756 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.949576 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.949919 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.951598 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.952451 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.952529 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.953161 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.953743 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.955604 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.956159 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: 
\"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.956311 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.956516 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.957745 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.958170 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.961124 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.963813 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:22 crc kubenswrapper[4907]: I0127 18:43:22.968796 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksxrd\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-kube-api-access-ksxrd\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-h47vc\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.055125 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.188720 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.354263 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-utilities\") pod \"a7401251-23ae-4ff6-8e3f-b40f4d072626\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.354421 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n6st\" (UniqueName: \"kubernetes.io/projected/a7401251-23ae-4ff6-8e3f-b40f4d072626-kube-api-access-9n6st\") pod \"a7401251-23ae-4ff6-8e3f-b40f4d072626\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.354619 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-catalog-content\") pod \"a7401251-23ae-4ff6-8e3f-b40f4d072626\" (UID: \"a7401251-23ae-4ff6-8e3f-b40f4d072626\") " Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.356317 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-utilities" (OuterVolumeSpecName: "utilities") pod "a7401251-23ae-4ff6-8e3f-b40f4d072626" (UID: "a7401251-23ae-4ff6-8e3f-b40f4d072626"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.356752 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.374427 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7401251-23ae-4ff6-8e3f-b40f4d072626-kube-api-access-9n6st" (OuterVolumeSpecName: "kube-api-access-9n6st") pod "a7401251-23ae-4ff6-8e3f-b40f4d072626" (UID: "a7401251-23ae-4ff6-8e3f-b40f4d072626"). InnerVolumeSpecName "kube-api-access-9n6st". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.460127 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n6st\" (UniqueName: \"kubernetes.io/projected/a7401251-23ae-4ff6-8e3f-b40f4d072626-kube-api-access-9n6st\") on node \"crc\" DevicePath \"\"" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.588207 4907 generic.go:334] "Generic (PLEG): container finished" podID="a7401251-23ae-4ff6-8e3f-b40f4d072626" containerID="1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb" exitCode=0 Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.588263 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjvlr" event={"ID":"a7401251-23ae-4ff6-8e3f-b40f4d072626","Type":"ContainerDied","Data":"1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb"} Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.588293 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vjvlr" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.588316 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjvlr" event={"ID":"a7401251-23ae-4ff6-8e3f-b40f4d072626","Type":"ContainerDied","Data":"557910ef70730e660beb16aea62a520fa40106c7bfd5cda588ce4a03758df88d"} Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.588351 4907 scope.go:117] "RemoveContainer" containerID="1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.626142 4907 scope.go:117] "RemoveContainer" containerID="62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.671461 4907 scope.go:117] "RemoveContainer" containerID="96457a94494bacc5a3958f9c64d2be5cb9dcf29a85e9589bd17d32559d93311f" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.703892 4907 scope.go:117] "RemoveContainer" containerID="1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb" Jan 27 18:43:23 crc kubenswrapper[4907]: E0127 18:43:23.704420 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb\": container with ID starting with 1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb not found: ID does not exist" containerID="1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.704508 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb"} err="failed to get container status \"1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb\": rpc error: code = NotFound desc = could not find container 
\"1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb\": container with ID starting with 1b41fa2d154e6164c43b0e27e459cc179ab7626650bc26132d4c6d8c1b65bdbb not found: ID does not exist" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.704649 4907 scope.go:117] "RemoveContainer" containerID="62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475" Jan 27 18:43:23 crc kubenswrapper[4907]: E0127 18:43:23.705110 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475\": container with ID starting with 62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475 not found: ID does not exist" containerID="62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.705167 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475"} err="failed to get container status \"62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475\": rpc error: code = NotFound desc = could not find container \"62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475\": container with ID starting with 62fcc847a8e4d0ddbdc7006cf7bb37db076b61b77fb9d8c1ffd20d7c2dda8475 not found: ID does not exist" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.705204 4907 scope.go:117] "RemoveContainer" containerID="96457a94494bacc5a3958f9c64d2be5cb9dcf29a85e9589bd17d32559d93311f" Jan 27 18:43:23 crc kubenswrapper[4907]: E0127 18:43:23.705665 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96457a94494bacc5a3958f9c64d2be5cb9dcf29a85e9589bd17d32559d93311f\": container with ID starting with 96457a94494bacc5a3958f9c64d2be5cb9dcf29a85e9589bd17d32559d93311f not found: ID does not exist" 
containerID="96457a94494bacc5a3958f9c64d2be5cb9dcf29a85e9589bd17d32559d93311f" Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.705707 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96457a94494bacc5a3958f9c64d2be5cb9dcf29a85e9589bd17d32559d93311f"} err="failed to get container status \"96457a94494bacc5a3958f9c64d2be5cb9dcf29a85e9589bd17d32559d93311f\": rpc error: code = NotFound desc = could not find container \"96457a94494bacc5a3958f9c64d2be5cb9dcf29a85e9589bd17d32559d93311f\": container with ID starting with 96457a94494bacc5a3958f9c64d2be5cb9dcf29a85e9589bd17d32559d93311f not found: ID does not exist" Jan 27 18:43:23 crc kubenswrapper[4907]: W0127 18:43:23.709303 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cadb1da_1dd2_49ac_a171_c672c006bfa8.slice/crio-1d867dd76ec5f547e213c8253e19782959c537bbdc60a1f223ad5b12dd32403b WatchSource:0}: Error finding container 1d867dd76ec5f547e213c8253e19782959c537bbdc60a1f223ad5b12dd32403b: Status 404 returned error can't find the container with id 1d867dd76ec5f547e213c8253e19782959c537bbdc60a1f223ad5b12dd32403b Jan 27 18:43:23 crc kubenswrapper[4907]: I0127 18:43:23.721106 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc"] Jan 27 18:43:24 crc kubenswrapper[4907]: I0127 18:43:24.129197 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7401251-23ae-4ff6-8e3f-b40f4d072626" (UID: "a7401251-23ae-4ff6-8e3f-b40f4d072626"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:43:24 crc kubenswrapper[4907]: I0127 18:43:24.178434 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7401251-23ae-4ff6-8e3f-b40f4d072626-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:43:24 crc kubenswrapper[4907]: I0127 18:43:24.233509 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vjvlr"] Jan 27 18:43:24 crc kubenswrapper[4907]: I0127 18:43:24.247427 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vjvlr"] Jan 27 18:43:24 crc kubenswrapper[4907]: I0127 18:43:24.604159 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" event={"ID":"4cadb1da-1dd2-49ac-a171-c672c006bfa8","Type":"ContainerStarted","Data":"2c2a8f81dec4df057264b31b53aa2f39ed5678418cf5de50d0c7b97df45f8aed"} Jan 27 18:43:24 crc kubenswrapper[4907]: I0127 18:43:24.604303 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" event={"ID":"4cadb1da-1dd2-49ac-a171-c672c006bfa8","Type":"ContainerStarted","Data":"1d867dd76ec5f547e213c8253e19782959c537bbdc60a1f223ad5b12dd32403b"} Jan 27 18:43:24 crc kubenswrapper[4907]: I0127 18:43:24.630899 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" podStartSLOduration=2.059303322 podStartE2EDuration="2.630871867s" podCreationTimestamp="2026-01-27 18:43:22 +0000 UTC" firstStartedPulling="2026-01-27 18:43:23.71220498 +0000 UTC m=+2258.841487592" lastFinishedPulling="2026-01-27 18:43:24.283773495 +0000 UTC m=+2259.413056137" observedRunningTime="2026-01-27 18:43:24.623503727 +0000 UTC m=+2259.752786339" watchObservedRunningTime="2026-01-27 18:43:24.630871867 +0000 UTC 
m=+2259.760154519" Jan 27 18:43:25 crc kubenswrapper[4907]: I0127 18:43:25.769329 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7401251-23ae-4ff6-8e3f-b40f4d072626" path="/var/lib/kubelet/pods/a7401251-23ae-4ff6-8e3f-b40f4d072626/volumes" Jan 27 18:43:26 crc kubenswrapper[4907]: I0127 18:43:26.522028 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:43:26 crc kubenswrapper[4907]: I0127 18:43:26.522138 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:43:26 crc kubenswrapper[4907]: I0127 18:43:26.522225 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:43:26 crc kubenswrapper[4907]: I0127 18:43:26.523950 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:43:26 crc kubenswrapper[4907]: I0127 18:43:26.524136 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" 
containerID="cri-o://30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" gracePeriod=600 Jan 27 18:43:26 crc kubenswrapper[4907]: E0127 18:43:26.651844 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:43:27 crc kubenswrapper[4907]: I0127 18:43:27.648880 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" exitCode=0 Jan 27 18:43:27 crc kubenswrapper[4907]: I0127 18:43:27.648929 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1"} Jan 27 18:43:27 crc kubenswrapper[4907]: I0127 18:43:27.648965 4907 scope.go:117] "RemoveContainer" containerID="659950f25293dd44f05a7437433bdb1b277bc9b532caa10ac47c8c5fa872cd61" Jan 27 18:43:27 crc kubenswrapper[4907]: I0127 18:43:27.649841 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:43:27 crc kubenswrapper[4907]: E0127 18:43:27.650148 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" 
podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:43:39 crc kubenswrapper[4907]: I0127 18:43:39.748546 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:43:39 crc kubenswrapper[4907]: E0127 18:43:39.749782 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:43:52 crc kubenswrapper[4907]: I0127 18:43:52.748415 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:43:52 crc kubenswrapper[4907]: E0127 18:43:52.749186 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:43:59 crc kubenswrapper[4907]: I0127 18:43:59.997895 4907 scope.go:117] "RemoveContainer" containerID="fe15588b0331dbcfdd43e5562b4615a7d1e85094313a81d832de104826372490" Jan 27 18:44:07 crc kubenswrapper[4907]: I0127 18:44:07.046026 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-zhncj"] Jan 27 18:44:07 crc kubenswrapper[4907]: I0127 18:44:07.059329 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-zhncj"] Jan 27 18:44:07 crc kubenswrapper[4907]: I0127 18:44:07.748527 4907 scope.go:117] "RemoveContainer" 
containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:44:07 crc kubenswrapper[4907]: E0127 18:44:07.748991 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:44:07 crc kubenswrapper[4907]: I0127 18:44:07.761689 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee2938a8-fe59-4c5a-abd0-7957ecb6b796" path="/var/lib/kubelet/pods/ee2938a8-fe59-4c5a-abd0-7957ecb6b796/volumes" Jan 27 18:44:12 crc kubenswrapper[4907]: I0127 18:44:12.165209 4907 generic.go:334] "Generic (PLEG): container finished" podID="4cadb1da-1dd2-49ac-a171-c672c006bfa8" containerID="2c2a8f81dec4df057264b31b53aa2f39ed5678418cf5de50d0c7b97df45f8aed" exitCode=0 Jan 27 18:44:12 crc kubenswrapper[4907]: I0127 18:44:12.165314 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" event={"ID":"4cadb1da-1dd2-49ac-a171-c672c006bfa8","Type":"ContainerDied","Data":"2c2a8f81dec4df057264b31b53aa2f39ed5678418cf5de50d0c7b97df45f8aed"} Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.706908 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.750796 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-ovn-default-certs-0\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.750837 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-libvirt-combined-ca-bundle\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.750919 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-neutron-metadata-combined-ca-bundle\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.750954 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-power-monitoring-combined-ca-bundle\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.750971 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-bootstrap-combined-ca-bundle\") pod 
\"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.751046 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.751085 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-combined-ca-bundle\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.751118 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksxrd\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-kube-api-access-ksxrd\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.751139 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.751162 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ovn-combined-ca-bundle\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" 
(UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.751186 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.751207 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.751270 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ssh-key-openstack-edpm-ipam\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.751331 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-inventory\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.751366 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-repo-setup-combined-ca-bundle\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: 
\"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.751387 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-nova-combined-ca-bundle\") pod \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\" (UID: \"4cadb1da-1dd2-49ac-a171-c672c006bfa8\") " Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.778088 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.779523 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.780231 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.782535 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.782662 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-kube-api-access-ksxrd" (OuterVolumeSpecName: "kube-api-access-ksxrd") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "kube-api-access-ksxrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.782774 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.782863 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.785791 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.786294 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.788952 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.827767 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). 
InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.827831 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.842924 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.843009 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-inventory" (OuterVolumeSpecName: "inventory") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.843824 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.844659 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "4cadb1da-1dd2-49ac-a171-c672c006bfa8" (UID: "4cadb1da-1dd2-49ac-a171-c672c006bfa8"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854023 4907 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854065 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksxrd\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-kube-api-access-ksxrd\") on node \"crc\" DevicePath \"\"" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854076 4907 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854089 4907 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854099 4907 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854109 4907 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854119 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854128 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854136 4907 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854144 4907 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854154 4907 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 
18:44:13.854164 4907 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854173 4907 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854182 4907 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854192 4907 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cadb1da-1dd2-49ac-a171-c672c006bfa8-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:44:13 crc kubenswrapper[4907]: I0127 18:44:13.854201 4907 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4cadb1da-1dd2-49ac-a171-c672c006bfa8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.185956 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" event={"ID":"4cadb1da-1dd2-49ac-a171-c672c006bfa8","Type":"ContainerDied","Data":"1d867dd76ec5f547e213c8253e19782959c537bbdc60a1f223ad5b12dd32403b"} Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.186001 4907 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="1d867dd76ec5f547e213c8253e19782959c537bbdc60a1f223ad5b12dd32403b" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.186063 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-h47vc" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.289510 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4"] Jan 27 18:44:14 crc kubenswrapper[4907]: E0127 18:44:14.290097 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7401251-23ae-4ff6-8e3f-b40f4d072626" containerName="extract-utilities" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.290114 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7401251-23ae-4ff6-8e3f-b40f4d072626" containerName="extract-utilities" Jan 27 18:44:14 crc kubenswrapper[4907]: E0127 18:44:14.290130 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cadb1da-1dd2-49ac-a171-c672c006bfa8" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.290138 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cadb1da-1dd2-49ac-a171-c672c006bfa8" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 18:44:14 crc kubenswrapper[4907]: E0127 18:44:14.290160 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7401251-23ae-4ff6-8e3f-b40f4d072626" containerName="registry-server" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.290166 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7401251-23ae-4ff6-8e3f-b40f4d072626" containerName="registry-server" Jan 27 18:44:14 crc kubenswrapper[4907]: E0127 18:44:14.290176 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7401251-23ae-4ff6-8e3f-b40f4d072626" containerName="extract-content" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 
18:44:14.290183 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7401251-23ae-4ff6-8e3f-b40f4d072626" containerName="extract-content" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.290364 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7401251-23ae-4ff6-8e3f-b40f4d072626" containerName="registry-server" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.290379 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cadb1da-1dd2-49ac-a171-c672c006bfa8" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.291184 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.294290 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.294776 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.295018 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.296526 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.299713 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.305498 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4"] Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.369749 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.369849 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzxc2\" (UniqueName: \"kubernetes.io/projected/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-kube-api-access-gzxc2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.369910 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.370041 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.370079 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ssh-key-openstack-edpm-ipam\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.472073 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.472181 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzxc2\" (UniqueName: \"kubernetes.io/projected/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-kube-api-access-gzxc2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.472241 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.472355 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.472405 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.473355 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.477005 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.477197 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.480907 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.493836 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzxc2\" (UniqueName: \"kubernetes.io/projected/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-kube-api-access-gzxc2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ldwl4\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:44:14 crc kubenswrapper[4907]: I0127 18:44:14.606721 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:44:15 crc kubenswrapper[4907]: I0127 18:44:15.178892 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4"] Jan 27 18:44:15 crc kubenswrapper[4907]: I0127 18:44:15.200041 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" event={"ID":"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7","Type":"ContainerStarted","Data":"848d76d694485867a5ed231277687643ca03ae0dcbec0944ab9e3db4d871eaca"} Jan 27 18:44:16 crc kubenswrapper[4907]: I0127 18:44:16.212296 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" event={"ID":"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7","Type":"ContainerStarted","Data":"72c1cd834ee86740416d86d919ee684eb50e58c8755fd63dfbb03b6b994e3c9e"} Jan 27 18:44:16 crc kubenswrapper[4907]: I0127 18:44:16.234972 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" podStartSLOduration=1.575798606 podStartE2EDuration="2.234951275s" podCreationTimestamp="2026-01-27 18:44:14 +0000 UTC" firstStartedPulling="2026-01-27 18:44:15.184806196 +0000 UTC m=+2310.314088808" lastFinishedPulling="2026-01-27 18:44:15.843958865 +0000 UTC 
m=+2310.973241477" observedRunningTime="2026-01-27 18:44:16.225722553 +0000 UTC m=+2311.355005175" watchObservedRunningTime="2026-01-27 18:44:16.234951275 +0000 UTC m=+2311.364233887" Jan 27 18:44:20 crc kubenswrapper[4907]: I0127 18:44:20.747979 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:44:20 crc kubenswrapper[4907]: E0127 18:44:20.748997 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:44:33 crc kubenswrapper[4907]: I0127 18:44:33.748826 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:44:33 crc kubenswrapper[4907]: E0127 18:44:33.750168 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:44:47 crc kubenswrapper[4907]: I0127 18:44:47.749166 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:44:47 crc kubenswrapper[4907]: E0127 18:44:47.750194 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.109791 4907 scope.go:117] "RemoveContainer" containerID="71edccfab69f94ffccb7125670bbcbccf2cbcbd3a33a02eb0595cd8175c5d918" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.183137 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6"] Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.185036 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.187654 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.194488 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6"] Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.196324 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.243449 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8fea3de-b1db-4c31-8636-329b2d296f02-config-volume\") pod \"collect-profiles-29492325-rhbh6\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.243785 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8fea3de-b1db-4c31-8636-329b2d296f02-secret-volume\") pod \"collect-profiles-29492325-rhbh6\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.244001 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78tzs\" (UniqueName: \"kubernetes.io/projected/a8fea3de-b1db-4c31-8636-329b2d296f02-kube-api-access-78tzs\") pod \"collect-profiles-29492325-rhbh6\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.346691 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78tzs\" (UniqueName: \"kubernetes.io/projected/a8fea3de-b1db-4c31-8636-329b2d296f02-kube-api-access-78tzs\") pod \"collect-profiles-29492325-rhbh6\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.346876 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8fea3de-b1db-4c31-8636-329b2d296f02-config-volume\") pod \"collect-profiles-29492325-rhbh6\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.347267 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8fea3de-b1db-4c31-8636-329b2d296f02-secret-volume\") pod \"collect-profiles-29492325-rhbh6\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.348020 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8fea3de-b1db-4c31-8636-329b2d296f02-config-volume\") pod \"collect-profiles-29492325-rhbh6\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.353194 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8fea3de-b1db-4c31-8636-329b2d296f02-secret-volume\") pod \"collect-profiles-29492325-rhbh6\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.375374 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78tzs\" (UniqueName: \"kubernetes.io/projected/a8fea3de-b1db-4c31-8636-329b2d296f02-kube-api-access-78tzs\") pod \"collect-profiles-29492325-rhbh6\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.520904 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:00 crc kubenswrapper[4907]: I0127 18:45:00.994861 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6"] Jan 27 18:45:00 crc kubenswrapper[4907]: W0127 18:45:00.996862 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8fea3de_b1db_4c31_8636_329b2d296f02.slice/crio-b222788e746aa0d2f6dd57f6a248fbdbaa70066f12c9dc20726c047784167f6a WatchSource:0}: Error finding container b222788e746aa0d2f6dd57f6a248fbdbaa70066f12c9dc20726c047784167f6a: Status 404 returned error can't find the container with id b222788e746aa0d2f6dd57f6a248fbdbaa70066f12c9dc20726c047784167f6a Jan 27 18:45:01 crc kubenswrapper[4907]: I0127 18:45:01.729841 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" event={"ID":"a8fea3de-b1db-4c31-8636-329b2d296f02","Type":"ContainerStarted","Data":"12ee584c52e810bd9eb16f6197a94605fc43b3769760895d2e0825f38ee71acc"} Jan 27 18:45:01 crc kubenswrapper[4907]: I0127 18:45:01.730201 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" event={"ID":"a8fea3de-b1db-4c31-8636-329b2d296f02","Type":"ContainerStarted","Data":"b222788e746aa0d2f6dd57f6a248fbdbaa70066f12c9dc20726c047784167f6a"} Jan 27 18:45:01 crc kubenswrapper[4907]: I0127 18:45:01.749041 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:45:01 crc kubenswrapper[4907]: E0127 18:45:01.749555 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:45:02 crc kubenswrapper[4907]: I0127 18:45:02.741683 4907 generic.go:334] "Generic (PLEG): container finished" podID="a8fea3de-b1db-4c31-8636-329b2d296f02" containerID="12ee584c52e810bd9eb16f6197a94605fc43b3769760895d2e0825f38ee71acc" exitCode=0 Jan 27 18:45:02 crc kubenswrapper[4907]: I0127 18:45:02.742044 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" event={"ID":"a8fea3de-b1db-4c31-8636-329b2d296f02","Type":"ContainerDied","Data":"12ee584c52e810bd9eb16f6197a94605fc43b3769760895d2e0825f38ee71acc"} Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.161433 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.335473 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78tzs\" (UniqueName: \"kubernetes.io/projected/a8fea3de-b1db-4c31-8636-329b2d296f02-kube-api-access-78tzs\") pod \"a8fea3de-b1db-4c31-8636-329b2d296f02\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.335533 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8fea3de-b1db-4c31-8636-329b2d296f02-secret-volume\") pod \"a8fea3de-b1db-4c31-8636-329b2d296f02\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.336661 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/a8fea3de-b1db-4c31-8636-329b2d296f02-config-volume\") pod \"a8fea3de-b1db-4c31-8636-329b2d296f02\" (UID: \"a8fea3de-b1db-4c31-8636-329b2d296f02\") " Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.337309 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8fea3de-b1db-4c31-8636-329b2d296f02-config-volume" (OuterVolumeSpecName: "config-volume") pod "a8fea3de-b1db-4c31-8636-329b2d296f02" (UID: "a8fea3de-b1db-4c31-8636-329b2d296f02"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.341219 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8fea3de-b1db-4c31-8636-329b2d296f02-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a8fea3de-b1db-4c31-8636-329b2d296f02" (UID: "a8fea3de-b1db-4c31-8636-329b2d296f02"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.371911 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8fea3de-b1db-4c31-8636-329b2d296f02-kube-api-access-78tzs" (OuterVolumeSpecName: "kube-api-access-78tzs") pod "a8fea3de-b1db-4c31-8636-329b2d296f02" (UID: "a8fea3de-b1db-4c31-8636-329b2d296f02"). InnerVolumeSpecName "kube-api-access-78tzs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.447433 4907 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8fea3de-b1db-4c31-8636-329b2d296f02-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.447487 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78tzs\" (UniqueName: \"kubernetes.io/projected/a8fea3de-b1db-4c31-8636-329b2d296f02-kube-api-access-78tzs\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.447506 4907 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8fea3de-b1db-4c31-8636-329b2d296f02-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.753070 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.771808 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6" event={"ID":"a8fea3de-b1db-4c31-8636-329b2d296f02","Type":"ContainerDied","Data":"b222788e746aa0d2f6dd57f6a248fbdbaa70066f12c9dc20726c047784167f6a"} Jan 27 18:45:03 crc kubenswrapper[4907]: I0127 18:45:03.771965 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b222788e746aa0d2f6dd57f6a248fbdbaa70066f12c9dc20726c047784167f6a" Jan 27 18:45:04 crc kubenswrapper[4907]: I0127 18:45:04.260223 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5"] Jan 27 18:45:04 crc kubenswrapper[4907]: I0127 18:45:04.270672 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29492280-hkhf5"] Jan 27 18:45:05 crc kubenswrapper[4907]: I0127 18:45:05.774162 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf" path="/var/lib/kubelet/pods/ea3a4626-8a1b-4c2f-a2d6-6d23684c96bf/volumes" Jan 27 18:45:15 crc kubenswrapper[4907]: I0127 18:45:15.748047 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:45:15 crc kubenswrapper[4907]: E0127 18:45:15.749226 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:45:25 crc kubenswrapper[4907]: I0127 18:45:25.004262 4907 generic.go:334] "Generic (PLEG): container finished" podID="1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7" containerID="72c1cd834ee86740416d86d919ee684eb50e58c8755fd63dfbb03b6b994e3c9e" exitCode=0 Jan 27 18:45:25 crc kubenswrapper[4907]: I0127 18:45:25.004338 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" event={"ID":"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7","Type":"ContainerDied","Data":"72c1cd834ee86740416d86d919ee684eb50e58c8755fd63dfbb03b6b994e3c9e"} Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.542488 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.654214 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovncontroller-config-0\") pod \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.654440 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovn-combined-ca-bundle\") pod \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.654495 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ssh-key-openstack-edpm-ipam\") pod \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.654623 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzxc2\" (UniqueName: \"kubernetes.io/projected/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-kube-api-access-gzxc2\") pod \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.654666 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-inventory\") pod \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\" (UID: \"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7\") " Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.664923 4907 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7" (UID: "1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.665083 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-kube-api-access-gzxc2" (OuterVolumeSpecName: "kube-api-access-gzxc2") pod "1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7" (UID: "1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7"). InnerVolumeSpecName "kube-api-access-gzxc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.696405 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7" (UID: "1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.697129 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-inventory" (OuterVolumeSpecName: "inventory") pod "1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7" (UID: "1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.701709 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7" (UID: "1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.757201 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.757231 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzxc2\" (UniqueName: \"kubernetes.io/projected/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-kube-api-access-gzxc2\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.757242 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.757252 4907 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:26 crc kubenswrapper[4907]: I0127 18:45:26.757262 4907 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.026745 4907 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" event={"ID":"1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7","Type":"ContainerDied","Data":"848d76d694485867a5ed231277687643ca03ae0dcbec0944ab9e3db4d871eaca"} Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.027104 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="848d76d694485867a5ed231277687643ca03ae0dcbec0944ab9e3db4d871eaca" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.026802 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ldwl4" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.202626 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc"] Jan 27 18:45:27 crc kubenswrapper[4907]: E0127 18:45:27.203409 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8fea3de-b1db-4c31-8636-329b2d296f02" containerName="collect-profiles" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.203431 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fea3de-b1db-4c31-8636-329b2d296f02" containerName="collect-profiles" Jan 27 18:45:27 crc kubenswrapper[4907]: E0127 18:45:27.203495 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.203507 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.203832 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8fea3de-b1db-4c31-8636-329b2d296f02" containerName="collect-profiles" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.203854 4907 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.205112 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.210105 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.210228 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.210344 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.210422 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.210536 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.211906 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.216234 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc"] Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.373755 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") 
" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.373804 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.377054 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.377131 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85wvn\" (UniqueName: \"kubernetes.io/projected/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-kube-api-access-85wvn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.377244 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: 
\"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.377331 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.480106 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.480198 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.480286 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.480316 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85wvn\" (UniqueName: \"kubernetes.io/projected/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-kube-api-access-85wvn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.480402 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.480835 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.484900 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.484965 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.485745 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.500081 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.500392 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.505150 4907 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85wvn\" (UniqueName: \"kubernetes.io/projected/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-kube-api-access-85wvn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:27 crc kubenswrapper[4907]: I0127 18:45:27.557444 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:45:28 crc kubenswrapper[4907]: I0127 18:45:28.150644 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc"] Jan 27 18:45:28 crc kubenswrapper[4907]: I0127 18:45:28.162281 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:45:28 crc kubenswrapper[4907]: I0127 18:45:28.749380 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:45:28 crc kubenswrapper[4907]: E0127 18:45:28.750032 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:45:29 crc kubenswrapper[4907]: I0127 18:45:29.054893 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" event={"ID":"30518ac3-ca77-4963-8ab9-1f0dd9c596eb","Type":"ContainerStarted","Data":"e7ec6eb1b56eca483aafca9fcbb1b07cb4e62783ded4f37152a6de06677afcc1"} Jan 27 18:45:30 crc 
kubenswrapper[4907]: I0127 18:45:30.064918 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" event={"ID":"30518ac3-ca77-4963-8ab9-1f0dd9c596eb","Type":"ContainerStarted","Data":"bb391d40e733b99e5a006459969a6fbbe8617525abc0815470e85a67c237b793"} Jan 27 18:45:30 crc kubenswrapper[4907]: I0127 18:45:30.107709 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" podStartSLOduration=1.9877750349999999 podStartE2EDuration="3.107681675s" podCreationTimestamp="2026-01-27 18:45:27 +0000 UTC" firstStartedPulling="2026-01-27 18:45:28.162036763 +0000 UTC m=+2383.291319375" lastFinishedPulling="2026-01-27 18:45:29.281943413 +0000 UTC m=+2384.411226015" observedRunningTime="2026-01-27 18:45:30.085183896 +0000 UTC m=+2385.214466508" watchObservedRunningTime="2026-01-27 18:45:30.107681675 +0000 UTC m=+2385.236964307" Jan 27 18:45:39 crc kubenswrapper[4907]: I0127 18:45:39.748773 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:45:39 crc kubenswrapper[4907]: E0127 18:45:39.749726 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:45:50 crc kubenswrapper[4907]: I0127 18:45:50.749025 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:45:50 crc kubenswrapper[4907]: E0127 18:45:50.749828 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:46:00 crc kubenswrapper[4907]: I0127 18:46:00.226134 4907 scope.go:117] "RemoveContainer" containerID="52e479a89219f19ceb319c0a0b04b0a15c0dea8abf0cf5c2205e3f54c150fd79" Jan 27 18:46:01 crc kubenswrapper[4907]: I0127 18:46:01.748983 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:46:01 crc kubenswrapper[4907]: E0127 18:46:01.751405 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:46:15 crc kubenswrapper[4907]: I0127 18:46:15.759721 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:46:15 crc kubenswrapper[4907]: E0127 18:46:15.760749 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:46:22 crc kubenswrapper[4907]: I0127 18:46:22.617312 4907 generic.go:334] "Generic (PLEG): container finished" 
podID="30518ac3-ca77-4963-8ab9-1f0dd9c596eb" containerID="bb391d40e733b99e5a006459969a6fbbe8617525abc0815470e85a67c237b793" exitCode=0 Jan 27 18:46:22 crc kubenswrapper[4907]: I0127 18:46:22.617371 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" event={"ID":"30518ac3-ca77-4963-8ab9-1f0dd9c596eb","Type":"ContainerDied","Data":"bb391d40e733b99e5a006459969a6fbbe8617525abc0815470e85a67c237b793"} Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.176657 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.281191 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-ssh-key-openstack-edpm-ipam\") pod \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.281277 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.281409 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-nova-metadata-neutron-config-0\") pod \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.281516 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-85wvn\" (UniqueName: \"kubernetes.io/projected/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-kube-api-access-85wvn\") pod \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.281668 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-inventory\") pod \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.281777 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-metadata-combined-ca-bundle\") pod \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\" (UID: \"30518ac3-ca77-4963-8ab9-1f0dd9c596eb\") " Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.290730 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "30518ac3-ca77-4963-8ab9-1f0dd9c596eb" (UID: "30518ac3-ca77-4963-8ab9-1f0dd9c596eb"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.291077 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-kube-api-access-85wvn" (OuterVolumeSpecName: "kube-api-access-85wvn") pod "30518ac3-ca77-4963-8ab9-1f0dd9c596eb" (UID: "30518ac3-ca77-4963-8ab9-1f0dd9c596eb"). InnerVolumeSpecName "kube-api-access-85wvn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.321688 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "30518ac3-ca77-4963-8ab9-1f0dd9c596eb" (UID: "30518ac3-ca77-4963-8ab9-1f0dd9c596eb"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.323736 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "30518ac3-ca77-4963-8ab9-1f0dd9c596eb" (UID: "30518ac3-ca77-4963-8ab9-1f0dd9c596eb"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.323818 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "30518ac3-ca77-4963-8ab9-1f0dd9c596eb" (UID: "30518ac3-ca77-4963-8ab9-1f0dd9c596eb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.325927 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-inventory" (OuterVolumeSpecName: "inventory") pod "30518ac3-ca77-4963-8ab9-1f0dd9c596eb" (UID: "30518ac3-ca77-4963-8ab9-1f0dd9c596eb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.384323 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.384355 4907 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.384365 4907 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.384375 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85wvn\" (UniqueName: \"kubernetes.io/projected/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-kube-api-access-85wvn\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.384386 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.384396 4907 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30518ac3-ca77-4963-8ab9-1f0dd9c596eb-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.638833 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" event={"ID":"30518ac3-ca77-4963-8ab9-1f0dd9c596eb","Type":"ContainerDied","Data":"e7ec6eb1b56eca483aafca9fcbb1b07cb4e62783ded4f37152a6de06677afcc1"} Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.638874 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7ec6eb1b56eca483aafca9fcbb1b07cb4e62783ded4f37152a6de06677afcc1" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.639198 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.805002 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54"] Jan 27 18:46:24 crc kubenswrapper[4907]: E0127 18:46:24.805576 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30518ac3-ca77-4963-8ab9-1f0dd9c596eb" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.805594 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="30518ac3-ca77-4963-8ab9-1f0dd9c596eb" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.805848 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="30518ac3-ca77-4963-8ab9-1f0dd9c596eb" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.806758 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.809451 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.809508 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.809657 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.809686 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.810019 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.815449 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54"] Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.894978 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.895690 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwtv4\" (UniqueName: \"kubernetes.io/projected/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-kube-api-access-gwtv4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: 
\"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.895914 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.896078 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.896331 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.999207 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.999483 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.999522 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwtv4\" (UniqueName: \"kubernetes.io/projected/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-kube-api-access-gwtv4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.999594 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:24 crc kubenswrapper[4907]: I0127 18:46:24.999647 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:25 crc kubenswrapper[4907]: I0127 18:46:25.004325 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-ssh-key-openstack-edpm-ipam\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:25 crc kubenswrapper[4907]: I0127 18:46:25.006036 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:25 crc kubenswrapper[4907]: I0127 18:46:25.007576 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:25 crc kubenswrapper[4907]: I0127 18:46:25.008131 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:25 crc kubenswrapper[4907]: I0127 18:46:25.034381 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwtv4\" (UniqueName: \"kubernetes.io/projected/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-kube-api-access-gwtv4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kxr54\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:25 crc kubenswrapper[4907]: I0127 18:46:25.124220 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:46:25 crc kubenswrapper[4907]: I0127 18:46:25.680932 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54"] Jan 27 18:46:26 crc kubenswrapper[4907]: I0127 18:46:26.658598 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" event={"ID":"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011","Type":"ContainerStarted","Data":"bbf713b0078b30d25d8341c53102c3f52fc6eae1c45613502e03acdfc202e261"} Jan 27 18:46:27 crc kubenswrapper[4907]: I0127 18:46:27.749204 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:46:27 crc kubenswrapper[4907]: E0127 18:46:27.750118 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:46:28 crc kubenswrapper[4907]: I0127 18:46:28.683509 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" event={"ID":"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011","Type":"ContainerStarted","Data":"2ac918adbc99a2af2c22900e474cfa052487fa6b6d98d9b74ea574cba87e924a"} Jan 27 18:46:28 crc kubenswrapper[4907]: I0127 18:46:28.710249 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" podStartSLOduration=2.728148676 podStartE2EDuration="4.710230443s" podCreationTimestamp="2026-01-27 18:46:24 +0000 UTC" firstStartedPulling="2026-01-27 18:46:25.684237495 +0000 
UTC m=+2440.813520117" lastFinishedPulling="2026-01-27 18:46:27.666319272 +0000 UTC m=+2442.795601884" observedRunningTime="2026-01-27 18:46:28.700079744 +0000 UTC m=+2443.829362366" watchObservedRunningTime="2026-01-27 18:46:28.710230443 +0000 UTC m=+2443.839513055" Jan 27 18:46:38 crc kubenswrapper[4907]: I0127 18:46:38.747830 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:46:38 crc kubenswrapper[4907]: E0127 18:46:38.748664 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:46:50 crc kubenswrapper[4907]: I0127 18:46:50.749382 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:46:50 crc kubenswrapper[4907]: E0127 18:46:50.750140 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:47:01 crc kubenswrapper[4907]: I0127 18:47:01.751152 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:47:01 crc kubenswrapper[4907]: E0127 18:47:01.752243 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:47:16 crc kubenswrapper[4907]: I0127 18:47:16.748776 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:47:16 crc kubenswrapper[4907]: E0127 18:47:16.749364 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:47:28 crc kubenswrapper[4907]: I0127 18:47:28.748264 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:47:28 crc kubenswrapper[4907]: E0127 18:47:28.749118 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:47:39 crc kubenswrapper[4907]: I0127 18:47:39.748912 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:47:39 crc kubenswrapper[4907]: E0127 18:47:39.749968 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:47:51 crc kubenswrapper[4907]: I0127 18:47:51.748311 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:47:51 crc kubenswrapper[4907]: E0127 18:47:51.749624 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:48:03 crc kubenswrapper[4907]: I0127 18:48:03.748019 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:48:03 crc kubenswrapper[4907]: E0127 18:48:03.748969 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:48:17 crc kubenswrapper[4907]: I0127 18:48:17.749176 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:48:17 crc kubenswrapper[4907]: E0127 18:48:17.750699 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:48:28 crc kubenswrapper[4907]: I0127 18:48:28.749887 4907 scope.go:117] "RemoveContainer" containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:48:29 crc kubenswrapper[4907]: I0127 18:48:29.065415 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"5a8e7941f14e8200146341973fa546392ff2b8ad7577aa41d9bc85766266bee5"} Jan 27 18:50:56 crc kubenswrapper[4907]: I0127 18:50:56.521514 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:50:56 crc kubenswrapper[4907]: I0127 18:50:56.522215 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.586206 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wp8d9"] Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.589481 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.605317 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wp8d9"] Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.712045 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6h2p\" (UniqueName: \"kubernetes.io/projected/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-kube-api-access-m6h2p\") pod \"community-operators-wp8d9\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.712128 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-catalog-content\") pod \"community-operators-wp8d9\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.712286 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-utilities\") pod \"community-operators-wp8d9\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.816463 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6h2p\" (UniqueName: \"kubernetes.io/projected/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-kube-api-access-m6h2p\") pod \"community-operators-wp8d9\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.816595 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-catalog-content\") pod \"community-operators-wp8d9\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.816894 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-utilities\") pod \"community-operators-wp8d9\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.816973 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-catalog-content\") pod \"community-operators-wp8d9\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.817323 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-utilities\") pod \"community-operators-wp8d9\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.838121 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6h2p\" (UniqueName: \"kubernetes.io/projected/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-kube-api-access-m6h2p\") pod \"community-operators-wp8d9\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:50:59 crc kubenswrapper[4907]: I0127 18:50:59.926163 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:51:00 crc kubenswrapper[4907]: W0127 18:51:00.537126 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa8f3d6c_3ea2_41ea_ac13_b12933bf4878.slice/crio-34520b899b4799f115fafc1163d4fef1a83f47cd18a3bf803ae0ae4b8e7d428d WatchSource:0}: Error finding container 34520b899b4799f115fafc1163d4fef1a83f47cd18a3bf803ae0ae4b8e7d428d: Status 404 returned error can't find the container with id 34520b899b4799f115fafc1163d4fef1a83f47cd18a3bf803ae0ae4b8e7d428d Jan 27 18:51:00 crc kubenswrapper[4907]: I0127 18:51:00.538385 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wp8d9"] Jan 27 18:51:00 crc kubenswrapper[4907]: I0127 18:51:00.727423 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wp8d9" event={"ID":"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878","Type":"ContainerStarted","Data":"34520b899b4799f115fafc1163d4fef1a83f47cd18a3bf803ae0ae4b8e7d428d"} Jan 27 18:51:01 crc kubenswrapper[4907]: I0127 18:51:01.744063 4907 generic.go:334] "Generic (PLEG): container finished" podID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" containerID="d8d3b2d1de2533779a35d9fe8ed8f7083a516d4b1a2fa51dfd413adc8ac6cefd" exitCode=0 Jan 27 18:51:01 crc kubenswrapper[4907]: I0127 18:51:01.744139 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wp8d9" event={"ID":"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878","Type":"ContainerDied","Data":"d8d3b2d1de2533779a35d9fe8ed8f7083a516d4b1a2fa51dfd413adc8ac6cefd"} Jan 27 18:51:01 crc kubenswrapper[4907]: I0127 18:51:01.751095 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:51:03 crc kubenswrapper[4907]: I0127 18:51:03.862438 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-wp8d9" event={"ID":"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878","Type":"ContainerStarted","Data":"0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb"} Jan 27 18:51:05 crc kubenswrapper[4907]: I0127 18:51:05.886331 4907 generic.go:334] "Generic (PLEG): container finished" podID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" containerID="0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb" exitCode=0 Jan 27 18:51:05 crc kubenswrapper[4907]: I0127 18:51:05.886414 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wp8d9" event={"ID":"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878","Type":"ContainerDied","Data":"0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb"} Jan 27 18:51:06 crc kubenswrapper[4907]: I0127 18:51:06.934107 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wp8d9" event={"ID":"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878","Type":"ContainerStarted","Data":"4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19"} Jan 27 18:51:06 crc kubenswrapper[4907]: I0127 18:51:06.958107 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wp8d9" podStartSLOduration=3.421455706 podStartE2EDuration="7.958085431s" podCreationTimestamp="2026-01-27 18:50:59 +0000 UTC" firstStartedPulling="2026-01-27 18:51:01.750696059 +0000 UTC m=+2716.879978671" lastFinishedPulling="2026-01-27 18:51:06.287325784 +0000 UTC m=+2721.416608396" observedRunningTime="2026-01-27 18:51:06.952627026 +0000 UTC m=+2722.081909658" watchObservedRunningTime="2026-01-27 18:51:06.958085431 +0000 UTC m=+2722.087368033" Jan 27 18:51:09 crc kubenswrapper[4907]: I0127 18:51:09.927741 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:51:09 crc kubenswrapper[4907]: I0127 18:51:09.928296 
4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:51:09 crc kubenswrapper[4907]: I0127 18:51:09.977169 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:51:19 crc kubenswrapper[4907]: I0127 18:51:19.984394 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.051709 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wp8d9"] Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.082276 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wp8d9" podUID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" containerName="registry-server" containerID="cri-o://4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19" gracePeriod=2 Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.666629 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.738497 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-catalog-content\") pod \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.738773 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6h2p\" (UniqueName: \"kubernetes.io/projected/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-kube-api-access-m6h2p\") pod \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.738890 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-utilities\") pod \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\" (UID: \"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878\") " Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.740843 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-utilities" (OuterVolumeSpecName: "utilities") pod "aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" (UID: "aa8f3d6c-3ea2-41ea-ac13-b12933bf4878"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.746449 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-kube-api-access-m6h2p" (OuterVolumeSpecName: "kube-api-access-m6h2p") pod "aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" (UID: "aa8f3d6c-3ea2-41ea-ac13-b12933bf4878"). InnerVolumeSpecName "kube-api-access-m6h2p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.799165 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" (UID: "aa8f3d6c-3ea2-41ea-ac13-b12933bf4878"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.843582 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.843969 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6h2p\" (UniqueName: \"kubernetes.io/projected/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-kube-api-access-m6h2p\") on node \"crc\" DevicePath \"\"" Jan 27 18:51:20 crc kubenswrapper[4907]: I0127 18:51:20.843986 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.100282 4907 generic.go:334] "Generic (PLEG): container finished" podID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" containerID="4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19" exitCode=0 Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.100323 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wp8d9" event={"ID":"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878","Type":"ContainerDied","Data":"4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19"} Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.100355 4907 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-wp8d9" event={"ID":"aa8f3d6c-3ea2-41ea-ac13-b12933bf4878","Type":"ContainerDied","Data":"34520b899b4799f115fafc1163d4fef1a83f47cd18a3bf803ae0ae4b8e7d428d"} Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.100377 4907 scope.go:117] "RemoveContainer" containerID="4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19" Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.100382 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wp8d9" Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.139677 4907 scope.go:117] "RemoveContainer" containerID="0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb" Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.149068 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wp8d9"] Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.161073 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wp8d9"] Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.177587 4907 scope.go:117] "RemoveContainer" containerID="d8d3b2d1de2533779a35d9fe8ed8f7083a516d4b1a2fa51dfd413adc8ac6cefd" Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.239424 4907 scope.go:117] "RemoveContainer" containerID="4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19" Jan 27 18:51:21 crc kubenswrapper[4907]: E0127 18:51:21.240030 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19\": container with ID starting with 4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19 not found: ID does not exist" containerID="4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19" Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 
18:51:21.240075 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19"} err="failed to get container status \"4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19\": rpc error: code = NotFound desc = could not find container \"4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19\": container with ID starting with 4cb25e4d006b311e3288f96711623ce0054841e977d40da20a54f5cf36c32b19 not found: ID does not exist" Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.240100 4907 scope.go:117] "RemoveContainer" containerID="0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb" Jan 27 18:51:21 crc kubenswrapper[4907]: E0127 18:51:21.240415 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb\": container with ID starting with 0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb not found: ID does not exist" containerID="0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb" Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.240448 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb"} err="failed to get container status \"0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb\": rpc error: code = NotFound desc = could not find container \"0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb\": container with ID starting with 0e49830d537c49f8a9fade06c9578811a9e2981790ead719b8bebde5d9c487eb not found: ID does not exist" Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.240487 4907 scope.go:117] "RemoveContainer" containerID="d8d3b2d1de2533779a35d9fe8ed8f7083a516d4b1a2fa51dfd413adc8ac6cefd" Jan 27 18:51:21 crc 
kubenswrapper[4907]: E0127 18:51:21.240734 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8d3b2d1de2533779a35d9fe8ed8f7083a516d4b1a2fa51dfd413adc8ac6cefd\": container with ID starting with d8d3b2d1de2533779a35d9fe8ed8f7083a516d4b1a2fa51dfd413adc8ac6cefd not found: ID does not exist" containerID="d8d3b2d1de2533779a35d9fe8ed8f7083a516d4b1a2fa51dfd413adc8ac6cefd" Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.240756 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8d3b2d1de2533779a35d9fe8ed8f7083a516d4b1a2fa51dfd413adc8ac6cefd"} err="failed to get container status \"d8d3b2d1de2533779a35d9fe8ed8f7083a516d4b1a2fa51dfd413adc8ac6cefd\": rpc error: code = NotFound desc = could not find container \"d8d3b2d1de2533779a35d9fe8ed8f7083a516d4b1a2fa51dfd413adc8ac6cefd\": container with ID starting with d8d3b2d1de2533779a35d9fe8ed8f7083a516d4b1a2fa51dfd413adc8ac6cefd not found: ID does not exist" Jan 27 18:51:21 crc kubenswrapper[4907]: I0127 18:51:21.767546 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" path="/var/lib/kubelet/pods/aa8f3d6c-3ea2-41ea-ac13-b12933bf4878/volumes" Jan 27 18:51:26 crc kubenswrapper[4907]: I0127 18:51:26.521234 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:51:26 crc kubenswrapper[4907]: I0127 18:51:26.521824 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.605846 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xgqxb"] Jan 27 18:51:54 crc kubenswrapper[4907]: E0127 18:51:54.607307 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" containerName="registry-server" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.607329 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" containerName="registry-server" Jan 27 18:51:54 crc kubenswrapper[4907]: E0127 18:51:54.607395 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" containerName="extract-utilities" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.607405 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" containerName="extract-utilities" Jan 27 18:51:54 crc kubenswrapper[4907]: E0127 18:51:54.607413 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" containerName="extract-content" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.607424 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" containerName="extract-content" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.607833 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa8f3d6c-3ea2-41ea-ac13-b12933bf4878" containerName="registry-server" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.610246 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.622911 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xgqxb"] Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.682167 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-utilities\") pod \"redhat-operators-xgqxb\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.682454 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-catalog-content\") pod \"redhat-operators-xgqxb\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.682654 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pppcx\" (UniqueName: \"kubernetes.io/projected/9c595f48-fb2b-4908-ad96-8607334515b9-kube-api-access-pppcx\") pod \"redhat-operators-xgqxb\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.785307 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-catalog-content\") pod \"redhat-operators-xgqxb\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.785507 4907 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-pppcx\" (UniqueName: \"kubernetes.io/projected/9c595f48-fb2b-4908-ad96-8607334515b9-kube-api-access-pppcx\") pod \"redhat-operators-xgqxb\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.785817 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-utilities\") pod \"redhat-operators-xgqxb\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.786520 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-catalog-content\") pod \"redhat-operators-xgqxb\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.786618 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-utilities\") pod \"redhat-operators-xgqxb\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.815310 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pppcx\" (UniqueName: \"kubernetes.io/projected/9c595f48-fb2b-4908-ad96-8607334515b9-kube-api-access-pppcx\") pod \"redhat-operators-xgqxb\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:51:54 crc kubenswrapper[4907]: I0127 18:51:54.934592 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:51:55 crc kubenswrapper[4907]: I0127 18:51:55.419950 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xgqxb"] Jan 27 18:51:55 crc kubenswrapper[4907]: I0127 18:51:55.453920 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgqxb" event={"ID":"9c595f48-fb2b-4908-ad96-8607334515b9","Type":"ContainerStarted","Data":"d030af88e069347ddd4b826a8d4f64433f67d574c407eadd28610f00671b7b2e"} Jan 27 18:51:56 crc kubenswrapper[4907]: I0127 18:51:56.855321 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:51:56 crc kubenswrapper[4907]: I0127 18:51:56.855624 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:51:56 crc kubenswrapper[4907]: I0127 18:51:56.855672 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:51:56 crc kubenswrapper[4907]: I0127 18:51:56.856676 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a8e7941f14e8200146341973fa546392ff2b8ad7577aa41d9bc85766266bee5"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:51:56 crc kubenswrapper[4907]: 
I0127 18:51:56.856734 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://5a8e7941f14e8200146341973fa546392ff2b8ad7577aa41d9bc85766266bee5" gracePeriod=600 Jan 27 18:51:56 crc kubenswrapper[4907]: I0127 18:51:56.901277 4907 generic.go:334] "Generic (PLEG): container finished" podID="9c595f48-fb2b-4908-ad96-8607334515b9" containerID="e073a9674480fa8362dfc7e8ddbab2d61b515591ca9af344d99a7cb07ebcd100" exitCode=0 Jan 27 18:51:56 crc kubenswrapper[4907]: I0127 18:51:56.901663 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgqxb" event={"ID":"9c595f48-fb2b-4908-ad96-8607334515b9","Type":"ContainerDied","Data":"e073a9674480fa8362dfc7e8ddbab2d61b515591ca9af344d99a7cb07ebcd100"} Jan 27 18:51:58 crc kubenswrapper[4907]: I0127 18:51:58.994322 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="5a8e7941f14e8200146341973fa546392ff2b8ad7577aa41d9bc85766266bee5" exitCode=0 Jan 27 18:51:58 crc kubenswrapper[4907]: I0127 18:51:58.994412 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"5a8e7941f14e8200146341973fa546392ff2b8ad7577aa41d9bc85766266bee5"} Jan 27 18:51:58 crc kubenswrapper[4907]: I0127 18:51:58.994913 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95"} Jan 27 18:51:58 crc kubenswrapper[4907]: I0127 18:51:58.994940 4907 scope.go:117] "RemoveContainer" 
containerID="30966e8cf4bb733b6b3293452afdb6f988d3db2a4e1a8d9f06b6298e1c23e5d1" Jan 27 18:51:58 crc kubenswrapper[4907]: I0127 18:51:58.997601 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgqxb" event={"ID":"9c595f48-fb2b-4908-ad96-8607334515b9","Type":"ContainerStarted","Data":"3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22"} Jan 27 18:52:05 crc kubenswrapper[4907]: I0127 18:52:05.184661 4907 generic.go:334] "Generic (PLEG): container finished" podID="9c595f48-fb2b-4908-ad96-8607334515b9" containerID="3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22" exitCode=0 Jan 27 18:52:05 crc kubenswrapper[4907]: I0127 18:52:05.184714 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgqxb" event={"ID":"9c595f48-fb2b-4908-ad96-8607334515b9","Type":"ContainerDied","Data":"3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22"} Jan 27 18:52:06 crc kubenswrapper[4907]: I0127 18:52:06.200120 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgqxb" event={"ID":"9c595f48-fb2b-4908-ad96-8607334515b9","Type":"ContainerStarted","Data":"29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987"} Jan 27 18:52:06 crc kubenswrapper[4907]: I0127 18:52:06.224666 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xgqxb" podStartSLOduration=3.500593011 podStartE2EDuration="12.224648902s" podCreationTimestamp="2026-01-27 18:51:54 +0000 UTC" firstStartedPulling="2026-01-27 18:51:56.905052034 +0000 UTC m=+2772.034334646" lastFinishedPulling="2026-01-27 18:52:05.629107935 +0000 UTC m=+2780.758390537" observedRunningTime="2026-01-27 18:52:06.220966938 +0000 UTC m=+2781.350249560" watchObservedRunningTime="2026-01-27 18:52:06.224648902 +0000 UTC m=+2781.353931514" Jan 27 18:52:14 crc kubenswrapper[4907]: I0127 18:52:14.934764 4907 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:52:14 crc kubenswrapper[4907]: I0127 18:52:14.935713 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.539169 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9dj2r"] Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.543324 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.573165 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9dj2r"] Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.734362 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-catalog-content\") pod \"redhat-marketplace-9dj2r\" (UID: \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.734456 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29m5g\" (UniqueName: \"kubernetes.io/projected/5ed1dd13-007f-48ad-9dc9-6870f507c44e-kube-api-access-29m5g\") pod \"redhat-marketplace-9dj2r\" (UID: \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.734493 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-utilities\") pod \"redhat-marketplace-9dj2r\" (UID: 
\"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.836532 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-catalog-content\") pod \"redhat-marketplace-9dj2r\" (UID: \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.836658 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29m5g\" (UniqueName: \"kubernetes.io/projected/5ed1dd13-007f-48ad-9dc9-6870f507c44e-kube-api-access-29m5g\") pod \"redhat-marketplace-9dj2r\" (UID: \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.836705 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-utilities\") pod \"redhat-marketplace-9dj2r\" (UID: \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.837129 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-utilities\") pod \"redhat-marketplace-9dj2r\" (UID: \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.837386 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-catalog-content\") pod \"redhat-marketplace-9dj2r\" (UID: \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " 
pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.857584 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29m5g\" (UniqueName: \"kubernetes.io/projected/5ed1dd13-007f-48ad-9dc9-6870f507c44e-kube-api-access-29m5g\") pod \"redhat-marketplace-9dj2r\" (UID: \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:15 crc kubenswrapper[4907]: I0127 18:52:15.869911 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:16 crc kubenswrapper[4907]: I0127 18:52:16.024625 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xgqxb" podUID="9c595f48-fb2b-4908-ad96-8607334515b9" containerName="registry-server" probeResult="failure" output=< Jan 27 18:52:16 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 18:52:16 crc kubenswrapper[4907]: > Jan 27 18:52:16 crc kubenswrapper[4907]: I0127 18:52:16.405393 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9dj2r"] Jan 27 18:52:17 crc kubenswrapper[4907]: I0127 18:52:17.326150 4907 generic.go:334] "Generic (PLEG): container finished" podID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" containerID="5bd4f27f26999408bce9f3690a11a536b752630079f2bbb00fc47a3cc7732c6a" exitCode=0 Jan 27 18:52:17 crc kubenswrapper[4907]: I0127 18:52:17.326200 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9dj2r" event={"ID":"5ed1dd13-007f-48ad-9dc9-6870f507c44e","Type":"ContainerDied","Data":"5bd4f27f26999408bce9f3690a11a536b752630079f2bbb00fc47a3cc7732c6a"} Jan 27 18:52:17 crc kubenswrapper[4907]: I0127 18:52:17.326680 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9dj2r" 
event={"ID":"5ed1dd13-007f-48ad-9dc9-6870f507c44e","Type":"ContainerStarted","Data":"7f8195908aa09dec9904b1e2fcfaf41d8cc54cfdf36b2b2802d16bd2dd4d46fd"} Jan 27 18:52:18 crc kubenswrapper[4907]: I0127 18:52:18.337786 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9dj2r" event={"ID":"5ed1dd13-007f-48ad-9dc9-6870f507c44e","Type":"ContainerStarted","Data":"edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0"} Jan 27 18:52:20 crc kubenswrapper[4907]: I0127 18:52:20.385688 4907 generic.go:334] "Generic (PLEG): container finished" podID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" containerID="edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0" exitCode=0 Jan 27 18:52:20 crc kubenswrapper[4907]: I0127 18:52:20.385737 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9dj2r" event={"ID":"5ed1dd13-007f-48ad-9dc9-6870f507c44e","Type":"ContainerDied","Data":"edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0"} Jan 27 18:52:21 crc kubenswrapper[4907]: I0127 18:52:21.399225 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9dj2r" event={"ID":"5ed1dd13-007f-48ad-9dc9-6870f507c44e","Type":"ContainerStarted","Data":"5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4"} Jan 27 18:52:21 crc kubenswrapper[4907]: I0127 18:52:21.428162 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9dj2r" podStartSLOduration=2.929622184 podStartE2EDuration="6.428135257s" podCreationTimestamp="2026-01-27 18:52:15 +0000 UTC" firstStartedPulling="2026-01-27 18:52:17.329373384 +0000 UTC m=+2792.458655986" lastFinishedPulling="2026-01-27 18:52:20.827886437 +0000 UTC m=+2795.957169059" observedRunningTime="2026-01-27 18:52:21.418501725 +0000 UTC m=+2796.547784337" watchObservedRunningTime="2026-01-27 18:52:21.428135257 +0000 UTC 
m=+2796.557417869" Jan 27 18:52:25 crc kubenswrapper[4907]: I0127 18:52:25.870787 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:25 crc kubenswrapper[4907]: I0127 18:52:25.871425 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:25 crc kubenswrapper[4907]: I0127 18:52:25.919778 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:25 crc kubenswrapper[4907]: I0127 18:52:25.989596 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xgqxb" podUID="9c595f48-fb2b-4908-ad96-8607334515b9" containerName="registry-server" probeResult="failure" output=< Jan 27 18:52:25 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 18:52:25 crc kubenswrapper[4907]: > Jan 27 18:52:26 crc kubenswrapper[4907]: I0127 18:52:26.537468 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:26 crc kubenswrapper[4907]: I0127 18:52:26.590419 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9dj2r"] Jan 27 18:52:28 crc kubenswrapper[4907]: I0127 18:52:28.490587 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9dj2r" podUID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" containerName="registry-server" containerID="cri-o://5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4" gracePeriod=2 Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.023449 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.183898 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-utilities\") pod \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\" (UID: \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.184211 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-catalog-content\") pod \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\" (UID: \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.184241 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29m5g\" (UniqueName: \"kubernetes.io/projected/5ed1dd13-007f-48ad-9dc9-6870f507c44e-kube-api-access-29m5g\") pod \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\" (UID: \"5ed1dd13-007f-48ad-9dc9-6870f507c44e\") " Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.184857 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-utilities" (OuterVolumeSpecName: "utilities") pod "5ed1dd13-007f-48ad-9dc9-6870f507c44e" (UID: "5ed1dd13-007f-48ad-9dc9-6870f507c44e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.190242 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ed1dd13-007f-48ad-9dc9-6870f507c44e-kube-api-access-29m5g" (OuterVolumeSpecName: "kube-api-access-29m5g") pod "5ed1dd13-007f-48ad-9dc9-6870f507c44e" (UID: "5ed1dd13-007f-48ad-9dc9-6870f507c44e"). InnerVolumeSpecName "kube-api-access-29m5g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.211312 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ed1dd13-007f-48ad-9dc9-6870f507c44e" (UID: "5ed1dd13-007f-48ad-9dc9-6870f507c44e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.286981 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.287015 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29m5g\" (UniqueName: \"kubernetes.io/projected/5ed1dd13-007f-48ad-9dc9-6870f507c44e-kube-api-access-29m5g\") on node \"crc\" DevicePath \"\"" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.287026 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ed1dd13-007f-48ad-9dc9-6870f507c44e-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.501938 4907 generic.go:334] "Generic (PLEG): container finished" podID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" containerID="5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4" exitCode=0 Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.501977 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9dj2r" event={"ID":"5ed1dd13-007f-48ad-9dc9-6870f507c44e","Type":"ContainerDied","Data":"5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4"} Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.502002 4907 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9dj2r" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.502010 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9dj2r" event={"ID":"5ed1dd13-007f-48ad-9dc9-6870f507c44e","Type":"ContainerDied","Data":"7f8195908aa09dec9904b1e2fcfaf41d8cc54cfdf36b2b2802d16bd2dd4d46fd"} Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.502044 4907 scope.go:117] "RemoveContainer" containerID="5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.529130 4907 scope.go:117] "RemoveContainer" containerID="edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.543048 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9dj2r"] Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.559785 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9dj2r"] Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.571725 4907 scope.go:117] "RemoveContainer" containerID="5bd4f27f26999408bce9f3690a11a536b752630079f2bbb00fc47a3cc7732c6a" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.642184 4907 scope.go:117] "RemoveContainer" containerID="5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4" Jan 27 18:52:29 crc kubenswrapper[4907]: E0127 18:52:29.642718 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4\": container with ID starting with 5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4 not found: ID does not exist" containerID="5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.642759 4907 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4"} err="failed to get container status \"5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4\": rpc error: code = NotFound desc = could not find container \"5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4\": container with ID starting with 5aacf5511c142a58ffa77e0b7a21da5d30e292782b698514220bf29aea6ce9f4 not found: ID does not exist" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.642785 4907 scope.go:117] "RemoveContainer" containerID="edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0" Jan 27 18:52:29 crc kubenswrapper[4907]: E0127 18:52:29.643192 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0\": container with ID starting with edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0 not found: ID does not exist" containerID="edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.643237 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0"} err="failed to get container status \"edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0\": rpc error: code = NotFound desc = could not find container \"edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0\": container with ID starting with edfda1fe60fa1fb108b0bb59c47b0ddb4f769c5e2472b18c527f5a682bcfd1f0 not found: ID does not exist" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.643268 4907 scope.go:117] "RemoveContainer" containerID="5bd4f27f26999408bce9f3690a11a536b752630079f2bbb00fc47a3cc7732c6a" Jan 27 18:52:29 crc kubenswrapper[4907]: E0127 
18:52:29.643587 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bd4f27f26999408bce9f3690a11a536b752630079f2bbb00fc47a3cc7732c6a\": container with ID starting with 5bd4f27f26999408bce9f3690a11a536b752630079f2bbb00fc47a3cc7732c6a not found: ID does not exist" containerID="5bd4f27f26999408bce9f3690a11a536b752630079f2bbb00fc47a3cc7732c6a" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.643613 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bd4f27f26999408bce9f3690a11a536b752630079f2bbb00fc47a3cc7732c6a"} err="failed to get container status \"5bd4f27f26999408bce9f3690a11a536b752630079f2bbb00fc47a3cc7732c6a\": rpc error: code = NotFound desc = could not find container \"5bd4f27f26999408bce9f3690a11a536b752630079f2bbb00fc47a3cc7732c6a\": container with ID starting with 5bd4f27f26999408bce9f3690a11a536b752630079f2bbb00fc47a3cc7732c6a not found: ID does not exist" Jan 27 18:52:29 crc kubenswrapper[4907]: I0127 18:52:29.759940 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" path="/var/lib/kubelet/pods/5ed1dd13-007f-48ad-9dc9-6870f507c44e/volumes" Jan 27 18:52:34 crc kubenswrapper[4907]: I0127 18:52:34.995581 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:52:35 crc kubenswrapper[4907]: I0127 18:52:35.051069 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:52:35 crc kubenswrapper[4907]: I0127 18:52:35.233627 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xgqxb"] Jan 27 18:52:36 crc kubenswrapper[4907]: I0127 18:52:36.571046 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xgqxb" 
podUID="9c595f48-fb2b-4908-ad96-8607334515b9" containerName="registry-server" containerID="cri-o://29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987" gracePeriod=2 Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.102202 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.272640 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-catalog-content\") pod \"9c595f48-fb2b-4908-ad96-8607334515b9\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.272905 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-utilities\") pod \"9c595f48-fb2b-4908-ad96-8607334515b9\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.273021 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pppcx\" (UniqueName: \"kubernetes.io/projected/9c595f48-fb2b-4908-ad96-8607334515b9-kube-api-access-pppcx\") pod \"9c595f48-fb2b-4908-ad96-8607334515b9\" (UID: \"9c595f48-fb2b-4908-ad96-8607334515b9\") " Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.273708 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-utilities" (OuterVolumeSpecName: "utilities") pod "9c595f48-fb2b-4908-ad96-8607334515b9" (UID: "9c595f48-fb2b-4908-ad96-8607334515b9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.283918 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c595f48-fb2b-4908-ad96-8607334515b9-kube-api-access-pppcx" (OuterVolumeSpecName: "kube-api-access-pppcx") pod "9c595f48-fb2b-4908-ad96-8607334515b9" (UID: "9c595f48-fb2b-4908-ad96-8607334515b9"). InnerVolumeSpecName "kube-api-access-pppcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.375173 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.375198 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pppcx\" (UniqueName: \"kubernetes.io/projected/9c595f48-fb2b-4908-ad96-8607334515b9-kube-api-access-pppcx\") on node \"crc\" DevicePath \"\"" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.398308 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c595f48-fb2b-4908-ad96-8607334515b9" (UID: "9c595f48-fb2b-4908-ad96-8607334515b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.477121 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c595f48-fb2b-4908-ad96-8607334515b9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.586451 4907 generic.go:334] "Generic (PLEG): container finished" podID="9c595f48-fb2b-4908-ad96-8607334515b9" containerID="29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987" exitCode=0 Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.586501 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgqxb" event={"ID":"9c595f48-fb2b-4908-ad96-8607334515b9","Type":"ContainerDied","Data":"29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987"} Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.586528 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgqxb" event={"ID":"9c595f48-fb2b-4908-ad96-8607334515b9","Type":"ContainerDied","Data":"d030af88e069347ddd4b826a8d4f64433f67d574c407eadd28610f00671b7b2e"} Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.586542 4907 scope.go:117] "RemoveContainer" containerID="29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.586815 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xgqxb" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.610037 4907 scope.go:117] "RemoveContainer" containerID="3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.640904 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xgqxb"] Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.651295 4907 scope.go:117] "RemoveContainer" containerID="e073a9674480fa8362dfc7e8ddbab2d61b515591ca9af344d99a7cb07ebcd100" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.655621 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xgqxb"] Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.707782 4907 scope.go:117] "RemoveContainer" containerID="29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987" Jan 27 18:52:37 crc kubenswrapper[4907]: E0127 18:52:37.708249 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987\": container with ID starting with 29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987 not found: ID does not exist" containerID="29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.708298 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987"} err="failed to get container status \"29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987\": rpc error: code = NotFound desc = could not find container \"29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987\": container with ID starting with 29e51496b162132df44a1d96c7c3f3e120b0d3e444ae4ebf558e7a024f82a987 not found: ID does 
not exist" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.708329 4907 scope.go:117] "RemoveContainer" containerID="3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22" Jan 27 18:52:37 crc kubenswrapper[4907]: E0127 18:52:37.708726 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22\": container with ID starting with 3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22 not found: ID does not exist" containerID="3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.708775 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22"} err="failed to get container status \"3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22\": rpc error: code = NotFound desc = could not find container \"3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22\": container with ID starting with 3f901bed7267dd8d298b9b2aa139ae81eb71effbf11ef2862c42ab91c7ba0f22 not found: ID does not exist" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.708802 4907 scope.go:117] "RemoveContainer" containerID="e073a9674480fa8362dfc7e8ddbab2d61b515591ca9af344d99a7cb07ebcd100" Jan 27 18:52:37 crc kubenswrapper[4907]: E0127 18:52:37.709187 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e073a9674480fa8362dfc7e8ddbab2d61b515591ca9af344d99a7cb07ebcd100\": container with ID starting with e073a9674480fa8362dfc7e8ddbab2d61b515591ca9af344d99a7cb07ebcd100 not found: ID does not exist" containerID="e073a9674480fa8362dfc7e8ddbab2d61b515591ca9af344d99a7cb07ebcd100" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.709258 4907 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e073a9674480fa8362dfc7e8ddbab2d61b515591ca9af344d99a7cb07ebcd100"} err="failed to get container status \"e073a9674480fa8362dfc7e8ddbab2d61b515591ca9af344d99a7cb07ebcd100\": rpc error: code = NotFound desc = could not find container \"e073a9674480fa8362dfc7e8ddbab2d61b515591ca9af344d99a7cb07ebcd100\": container with ID starting with e073a9674480fa8362dfc7e8ddbab2d61b515591ca9af344d99a7cb07ebcd100 not found: ID does not exist" Jan 27 18:52:37 crc kubenswrapper[4907]: I0127 18:52:37.761892 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c595f48-fb2b-4908-ad96-8607334515b9" path="/var/lib/kubelet/pods/9c595f48-fb2b-4908-ad96-8607334515b9/volumes" Jan 27 18:52:58 crc kubenswrapper[4907]: I0127 18:52:58.789115 4907 generic.go:334] "Generic (PLEG): container finished" podID="a1ab6c99-0bb2-45ca-9dc8-1d6da396d011" containerID="2ac918adbc99a2af2c22900e474cfa052487fa6b6d98d9b74ea574cba87e924a" exitCode=0 Jan 27 18:52:58 crc kubenswrapper[4907]: I0127 18:52:58.789181 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" event={"ID":"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011","Type":"ContainerDied","Data":"2ac918adbc99a2af2c22900e474cfa052487fa6b6d98d9b74ea574cba87e924a"} Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.371891 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.410500 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-ssh-key-openstack-edpm-ipam\") pod \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.410700 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwtv4\" (UniqueName: \"kubernetes.io/projected/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-kube-api-access-gwtv4\") pod \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.410802 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-combined-ca-bundle\") pod \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.411005 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-inventory\") pod \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.411045 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-secret-0\") pod \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\" (UID: \"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011\") " Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.416690 4907 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "a1ab6c99-0bb2-45ca-9dc8-1d6da396d011" (UID: "a1ab6c99-0bb2-45ca-9dc8-1d6da396d011"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.417001 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-kube-api-access-gwtv4" (OuterVolumeSpecName: "kube-api-access-gwtv4") pod "a1ab6c99-0bb2-45ca-9dc8-1d6da396d011" (UID: "a1ab6c99-0bb2-45ca-9dc8-1d6da396d011"). InnerVolumeSpecName "kube-api-access-gwtv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.447974 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a1ab6c99-0bb2-45ca-9dc8-1d6da396d011" (UID: "a1ab6c99-0bb2-45ca-9dc8-1d6da396d011"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.465640 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "a1ab6c99-0bb2-45ca-9dc8-1d6da396d011" (UID: "a1ab6c99-0bb2-45ca-9dc8-1d6da396d011"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.468786 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-inventory" (OuterVolumeSpecName: "inventory") pod "a1ab6c99-0bb2-45ca-9dc8-1d6da396d011" (UID: "a1ab6c99-0bb2-45ca-9dc8-1d6da396d011"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.519385 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.519423 4907 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.519437 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.519446 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwtv4\" (UniqueName: \"kubernetes.io/projected/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-kube-api-access-gwtv4\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.519455 4907 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ab6c99-0bb2-45ca-9dc8-1d6da396d011-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.813062 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" event={"ID":"a1ab6c99-0bb2-45ca-9dc8-1d6da396d011","Type":"ContainerDied","Data":"bbf713b0078b30d25d8341c53102c3f52fc6eae1c45613502e03acdfc202e261"} Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.813111 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbf713b0078b30d25d8341c53102c3f52fc6eae1c45613502e03acdfc202e261" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.813178 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kxr54" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.953062 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"] Jan 27 18:53:00 crc kubenswrapper[4907]: E0127 18:53:00.953607 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" containerName="extract-content" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.953623 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" containerName="extract-content" Jan 27 18:53:00 crc kubenswrapper[4907]: E0127 18:53:00.953638 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c595f48-fb2b-4908-ad96-8607334515b9" containerName="extract-utilities" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.953645 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c595f48-fb2b-4908-ad96-8607334515b9" containerName="extract-utilities" Jan 27 18:53:00 crc kubenswrapper[4907]: E0127 18:53:00.953665 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" containerName="extract-utilities" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.953670 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" 
containerName="extract-utilities" Jan 27 18:53:00 crc kubenswrapper[4907]: E0127 18:53:00.953686 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" containerName="registry-server" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.953693 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" containerName="registry-server" Jan 27 18:53:00 crc kubenswrapper[4907]: E0127 18:53:00.953707 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ab6c99-0bb2-45ca-9dc8-1d6da396d011" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.953713 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ab6c99-0bb2-45ca-9dc8-1d6da396d011" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 18:53:00 crc kubenswrapper[4907]: E0127 18:53:00.953731 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c595f48-fb2b-4908-ad96-8607334515b9" containerName="registry-server" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.953736 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c595f48-fb2b-4908-ad96-8607334515b9" containerName="registry-server" Jan 27 18:53:00 crc kubenswrapper[4907]: E0127 18:53:00.953755 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c595f48-fb2b-4908-ad96-8607334515b9" containerName="extract-content" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.953761 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c595f48-fb2b-4908-ad96-8607334515b9" containerName="extract-content" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.953954 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ab6c99-0bb2-45ca-9dc8-1d6da396d011" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.953975 4907 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="9c595f48-fb2b-4908-ad96-8607334515b9" containerName="registry-server" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.953990 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ed1dd13-007f-48ad-9dc9-6870f507c44e" containerName="registry-server" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.954879 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.958364 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.958594 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.958933 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.959043 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.959284 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.959478 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.959779 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:53:00 crc kubenswrapper[4907]: I0127 18:53:00.981209 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"] Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.035359 4907 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.035414 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.035473 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.035537 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5glf9\" (UniqueName: \"kubernetes.io/projected/c3ad9414-0787-40c9-a907-d59ec160f1dd-kube-api-access-5glf9\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.035666 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.035697 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.035728 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.035756 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.035795 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: 
\"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.137885 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.137948 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.138009 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.138061 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.138086 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.138139 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.138189 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5glf9\" (UniqueName: \"kubernetes.io/projected/c3ad9414-0787-40c9-a907-d59ec160f1dd-kube-api-access-5glf9\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.138247 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.138273 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: 
\"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.139023 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.143331 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.143830 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.143929 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.144407 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.144536 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.145110 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.147452 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.154373 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5glf9\" (UniqueName: \"kubernetes.io/projected/c3ad9414-0787-40c9-a907-d59ec160f1dd-kube-api-access-5glf9\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cjhx8\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.272212 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.808260 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8"] Jan 27 18:53:01 crc kubenswrapper[4907]: I0127 18:53:01.824101 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" event={"ID":"c3ad9414-0787-40c9-a907-d59ec160f1dd","Type":"ContainerStarted","Data":"0f69b8125dc9fcdd9b2cd9e4ff8f682a137335ebb73db521005819dd12103605"} Jan 27 18:53:03 crc kubenswrapper[4907]: I0127 18:53:03.850677 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" event={"ID":"c3ad9414-0787-40c9-a907-d59ec160f1dd","Type":"ContainerStarted","Data":"0ba51ce4afa559b311ee9f6eaceb7a46aae49217e1cccf0e0a8988f1b288a6a3"} Jan 27 18:53:03 crc kubenswrapper[4907]: I0127 18:53:03.883587 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" podStartSLOduration=3.269607013 podStartE2EDuration="3.883563771s" podCreationTimestamp="2026-01-27 18:53:00 +0000 UTC" firstStartedPulling="2026-01-27 18:53:01.809350188 +0000 UTC m=+2836.938632800" lastFinishedPulling="2026-01-27 18:53:02.423306946 +0000 UTC m=+2837.552589558" observedRunningTime="2026-01-27 18:53:03.871191172 +0000 UTC m=+2839.000473794" watchObservedRunningTime="2026-01-27 18:53:03.883563771 +0000 UTC m=+2839.012846383" Jan 27 18:54:26 crc kubenswrapper[4907]: I0127 18:54:26.521259 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:54:26 crc kubenswrapper[4907]: I0127 18:54:26.521870 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:54:56 crc kubenswrapper[4907]: I0127 18:54:56.528206 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:54:56 crc kubenswrapper[4907]: I0127 18:54:56.529966 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:55:26 crc kubenswrapper[4907]: I0127 18:55:26.520861 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:55:26 crc kubenswrapper[4907]: I0127 18:55:26.521437 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 27 18:55:26 crc kubenswrapper[4907]: I0127 18:55:26.521500 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 18:55:26 crc kubenswrapper[4907]: I0127 18:55:26.522621 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:55:26 crc kubenswrapper[4907]: I0127 18:55:26.522677 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" gracePeriod=600 Jan 27 18:55:26 crc kubenswrapper[4907]: E0127 18:55:26.641021 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:55:27 crc kubenswrapper[4907]: I0127 18:55:27.401616 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" exitCode=0 Jan 27 18:55:27 crc kubenswrapper[4907]: I0127 18:55:27.401665 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" 
event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95"} Jan 27 18:55:27 crc kubenswrapper[4907]: I0127 18:55:27.402373 4907 scope.go:117] "RemoveContainer" containerID="5a8e7941f14e8200146341973fa546392ff2b8ad7577aa41d9bc85766266bee5" Jan 27 18:55:27 crc kubenswrapper[4907]: I0127 18:55:27.403475 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:55:27 crc kubenswrapper[4907]: E0127 18:55:27.404035 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:55:30 crc kubenswrapper[4907]: I0127 18:55:30.440504 4907 generic.go:334] "Generic (PLEG): container finished" podID="c3ad9414-0787-40c9-a907-d59ec160f1dd" containerID="0ba51ce4afa559b311ee9f6eaceb7a46aae49217e1cccf0e0a8988f1b288a6a3" exitCode=0 Jan 27 18:55:30 crc kubenswrapper[4907]: I0127 18:55:30.440587 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" event={"ID":"c3ad9414-0787-40c9-a907-d59ec160f1dd","Type":"ContainerDied","Data":"0ba51ce4afa559b311ee9f6eaceb7a46aae49217e1cccf0e0a8988f1b288a6a3"} Jan 27 18:55:31 crc kubenswrapper[4907]: I0127 18:55:31.900488 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.009730 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-inventory\") pod \"c3ad9414-0787-40c9-a907-d59ec160f1dd\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.009828 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-1\") pod \"c3ad9414-0787-40c9-a907-d59ec160f1dd\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.009943 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-0\") pod \"c3ad9414-0787-40c9-a907-d59ec160f1dd\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.010045 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-1\") pod \"c3ad9414-0787-40c9-a907-d59ec160f1dd\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.010073 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-0\") pod \"c3ad9414-0787-40c9-a907-d59ec160f1dd\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.010104 4907 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-ssh-key-openstack-edpm-ipam\") pod \"c3ad9414-0787-40c9-a907-d59ec160f1dd\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.010202 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-combined-ca-bundle\") pod \"c3ad9414-0787-40c9-a907-d59ec160f1dd\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.010232 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-extra-config-0\") pod \"c3ad9414-0787-40c9-a907-d59ec160f1dd\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.010331 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5glf9\" (UniqueName: \"kubernetes.io/projected/c3ad9414-0787-40c9-a907-d59ec160f1dd-kube-api-access-5glf9\") pod \"c3ad9414-0787-40c9-a907-d59ec160f1dd\" (UID: \"c3ad9414-0787-40c9-a907-d59ec160f1dd\") " Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.017620 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "c3ad9414-0787-40c9-a907-d59ec160f1dd" (UID: "c3ad9414-0787-40c9-a907-d59ec160f1dd"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.019016 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3ad9414-0787-40c9-a907-d59ec160f1dd-kube-api-access-5glf9" (OuterVolumeSpecName: "kube-api-access-5glf9") pod "c3ad9414-0787-40c9-a907-d59ec160f1dd" (UID: "c3ad9414-0787-40c9-a907-d59ec160f1dd"). InnerVolumeSpecName "kube-api-access-5glf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.045978 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-inventory" (OuterVolumeSpecName: "inventory") pod "c3ad9414-0787-40c9-a907-d59ec160f1dd" (UID: "c3ad9414-0787-40c9-a907-d59ec160f1dd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.046364 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "c3ad9414-0787-40c9-a907-d59ec160f1dd" (UID: "c3ad9414-0787-40c9-a907-d59ec160f1dd"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.049177 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "c3ad9414-0787-40c9-a907-d59ec160f1dd" (UID: "c3ad9414-0787-40c9-a907-d59ec160f1dd"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.058200 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "c3ad9414-0787-40c9-a907-d59ec160f1dd" (UID: "c3ad9414-0787-40c9-a907-d59ec160f1dd"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.058667 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c3ad9414-0787-40c9-a907-d59ec160f1dd" (UID: "c3ad9414-0787-40c9-a907-d59ec160f1dd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.060977 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "c3ad9414-0787-40c9-a907-d59ec160f1dd" (UID: "c3ad9414-0787-40c9-a907-d59ec160f1dd"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.065269 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "c3ad9414-0787-40c9-a907-d59ec160f1dd" (UID: "c3ad9414-0787-40c9-a907-d59ec160f1dd"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.114083 4907 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.114128 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.114139 4907 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.114153 4907 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.114163 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5glf9\" (UniqueName: \"kubernetes.io/projected/c3ad9414-0787-40c9-a907-d59ec160f1dd-kube-api-access-5glf9\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.114176 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.114190 4907 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-migration-ssh-key-1\") on node \"crc\" 
DevicePath \"\"" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.114201 4907 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.114211 4907 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c3ad9414-0787-40c9-a907-d59ec160f1dd-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.464932 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" event={"ID":"c3ad9414-0787-40c9-a907-d59ec160f1dd","Type":"ContainerDied","Data":"0f69b8125dc9fcdd9b2cd9e4ff8f682a137335ebb73db521005819dd12103605"} Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.464987 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f69b8125dc9fcdd9b2cd9e4ff8f682a137335ebb73db521005819dd12103605" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.465047 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cjhx8" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.580298 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r"] Jan 27 18:55:32 crc kubenswrapper[4907]: E0127 18:55:32.581255 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ad9414-0787-40c9-a907-d59ec160f1dd" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.581282 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ad9414-0787-40c9-a907-d59ec160f1dd" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.581591 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3ad9414-0787-40c9-a907-d59ec160f1dd" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.582614 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.588501 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.588659 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.588819 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.588848 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.589030 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.594329 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r"] Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.741112 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.741200 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: 
\"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.741262 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf5t5\" (UniqueName: \"kubernetes.io/projected/fbb41855-75d9-4678-8e5c-7602c99dbf1c-kube-api-access-tf5t5\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.741283 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.741348 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.741462 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.741660 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.844324 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.844408 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf5t5\" (UniqueName: \"kubernetes.io/projected/fbb41855-75d9-4678-8e5c-7602c99dbf1c-kube-api-access-tf5t5\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.844437 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.844528 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.844589 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.844674 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.844809 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.851700 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.851914 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.852177 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.853305 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.853471 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.853789 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.863724 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf5t5\" (UniqueName: \"kubernetes.io/projected/fbb41855-75d9-4678-8e5c-7602c99dbf1c-kube-api-access-tf5t5\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-98h4r\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:32 crc kubenswrapper[4907]: I0127 18:55:32.900634 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:55:33 crc kubenswrapper[4907]: I0127 18:55:33.479946 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r"] Jan 27 18:55:34 crc kubenswrapper[4907]: I0127 18:55:34.492302 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" event={"ID":"fbb41855-75d9-4678-8e5c-7602c99dbf1c","Type":"ContainerStarted","Data":"95484d5be57abef73f90b14257482a1502bdef8c589d8adbaf0313f7948cde24"} Jan 27 18:55:34 crc kubenswrapper[4907]: I0127 18:55:34.492955 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" event={"ID":"fbb41855-75d9-4678-8e5c-7602c99dbf1c","Type":"ContainerStarted","Data":"04012c9b46ce41e496b0c16fe093a9bc28ba197166cbebb5afd7f059c6e4a1d8"} Jan 27 18:55:35 crc kubenswrapper[4907]: I0127 18:55:35.528295 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" podStartSLOduration=2.963293499 podStartE2EDuration="3.528278562s" podCreationTimestamp="2026-01-27 18:55:32 +0000 UTC" firstStartedPulling="2026-01-27 18:55:33.498554076 +0000 UTC m=+2988.627836688" lastFinishedPulling="2026-01-27 18:55:34.063539129 +0000 UTC m=+2989.192821751" observedRunningTime="2026-01-27 18:55:35.518151895 +0000 UTC m=+2990.647434517" watchObservedRunningTime="2026-01-27 18:55:35.528278562 +0000 UTC m=+2990.657561174" Jan 27 18:55:41 crc kubenswrapper[4907]: I0127 18:55:41.748660 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:55:41 crc kubenswrapper[4907]: E0127 18:55:41.749494 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:55:55 crc kubenswrapper[4907]: I0127 18:55:55.767769 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:55:55 crc kubenswrapper[4907]: E0127 18:55:55.768800 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:56:08 crc kubenswrapper[4907]: I0127 18:56:08.748907 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:56:08 crc kubenswrapper[4907]: E0127 18:56:08.751023 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:56:23 crc kubenswrapper[4907]: I0127 18:56:23.748636 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:56:23 crc kubenswrapper[4907]: E0127 18:56:23.749569 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:56:36 crc kubenswrapper[4907]: I0127 18:56:36.748493 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:56:36 crc kubenswrapper[4907]: E0127 18:56:36.749327 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:56:49 crc kubenswrapper[4907]: I0127 18:56:49.749018 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:56:49 crc kubenswrapper[4907]: E0127 18:56:49.749777 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:57:04 crc kubenswrapper[4907]: I0127 18:57:04.749190 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:57:04 crc kubenswrapper[4907]: E0127 18:57:04.750158 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:57:18 crc kubenswrapper[4907]: I0127 18:57:18.749119 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:57:18 crc kubenswrapper[4907]: E0127 18:57:18.750426 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:57:29 crc kubenswrapper[4907]: I0127 18:57:29.748775 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:57:29 crc kubenswrapper[4907]: E0127 18:57:29.749655 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:57:40 crc kubenswrapper[4907]: I0127 18:57:40.747996 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:57:40 crc kubenswrapper[4907]: E0127 18:57:40.748870 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:57:53 crc kubenswrapper[4907]: I0127 18:57:53.749327 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:57:53 crc kubenswrapper[4907]: E0127 18:57:53.750807 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:58:08 crc kubenswrapper[4907]: I0127 18:58:08.237857 4907 generic.go:334] "Generic (PLEG): container finished" podID="fbb41855-75d9-4678-8e5c-7602c99dbf1c" containerID="95484d5be57abef73f90b14257482a1502bdef8c589d8adbaf0313f7948cde24" exitCode=0 Jan 27 18:58:08 crc kubenswrapper[4907]: I0127 18:58:08.237963 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" event={"ID":"fbb41855-75d9-4678-8e5c-7602c99dbf1c","Type":"ContainerDied","Data":"95484d5be57abef73f90b14257482a1502bdef8c589d8adbaf0313f7948cde24"} Jan 27 18:58:08 crc kubenswrapper[4907]: I0127 18:58:08.748525 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:58:08 crc kubenswrapper[4907]: E0127 18:58:08.749019 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.743748 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.884887 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf5t5\" (UniqueName: \"kubernetes.io/projected/fbb41855-75d9-4678-8e5c-7602c99dbf1c-kube-api-access-tf5t5\") pod \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.885077 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-1\") pod \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.885214 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-2\") pod \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.885307 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ssh-key-openstack-edpm-ipam\") pod \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\" (UID: 
\"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.885393 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-inventory\") pod \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.885436 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-telemetry-combined-ca-bundle\") pod \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.885468 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-0\") pod \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\" (UID: \"fbb41855-75d9-4678-8e5c-7602c99dbf1c\") " Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.891696 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbb41855-75d9-4678-8e5c-7602c99dbf1c-kube-api-access-tf5t5" (OuterVolumeSpecName: "kube-api-access-tf5t5") pod "fbb41855-75d9-4678-8e5c-7602c99dbf1c" (UID: "fbb41855-75d9-4678-8e5c-7602c99dbf1c"). InnerVolumeSpecName "kube-api-access-tf5t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.891868 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "fbb41855-75d9-4678-8e5c-7602c99dbf1c" (UID: "fbb41855-75d9-4678-8e5c-7602c99dbf1c"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.922361 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "fbb41855-75d9-4678-8e5c-7602c99dbf1c" (UID: "fbb41855-75d9-4678-8e5c-7602c99dbf1c"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.926761 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "fbb41855-75d9-4678-8e5c-7602c99dbf1c" (UID: "fbb41855-75d9-4678-8e5c-7602c99dbf1c"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.928112 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "fbb41855-75d9-4678-8e5c-7602c99dbf1c" (UID: "fbb41855-75d9-4678-8e5c-7602c99dbf1c"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.928978 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-inventory" (OuterVolumeSpecName: "inventory") pod "fbb41855-75d9-4678-8e5c-7602c99dbf1c" (UID: "fbb41855-75d9-4678-8e5c-7602c99dbf1c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.931727 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fbb41855-75d9-4678-8e5c-7602c99dbf1c" (UID: "fbb41855-75d9-4678-8e5c-7602c99dbf1c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.990436 4907 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.990487 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf5t5\" (UniqueName: \"kubernetes.io/projected/fbb41855-75d9-4678-8e5c-7602c99dbf1c-kube-api-access-tf5t5\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.990505 4907 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.990519 4907 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.990532 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath 
\"\"" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.990542 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:09 crc kubenswrapper[4907]: I0127 18:58:09.990567 4907 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb41855-75d9-4678-8e5c-7602c99dbf1c-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.289803 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" event={"ID":"fbb41855-75d9-4678-8e5c-7602c99dbf1c","Type":"ContainerDied","Data":"04012c9b46ce41e496b0c16fe093a9bc28ba197166cbebb5afd7f059c6e4a1d8"} Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.290150 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04012c9b46ce41e496b0c16fe093a9bc28ba197166cbebb5afd7f059c6e4a1d8" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.290071 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-98h4r" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.366755 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx"] Jan 27 18:58:10 crc kubenswrapper[4907]: E0127 18:58:10.367640 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb41855-75d9-4678-8e5c-7602c99dbf1c" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.367666 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb41855-75d9-4678-8e5c-7602c99dbf1c" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.367961 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbb41855-75d9-4678-8e5c-7602c99dbf1c" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.369231 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.375790 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.376078 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.376201 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.376937 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.379897 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.387070 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx"] Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.503639 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.503710 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.504095 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.504169 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.504445 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pfg8\" (UniqueName: \"kubernetes.io/projected/36c00f4a-e4e0-472b-a51c-510d44296cf8-kube-api-access-7pfg8\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.504501 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.504576 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.606820 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pfg8\" (UniqueName: \"kubernetes.io/projected/36c00f4a-e4e0-472b-a51c-510d44296cf8-kube-api-access-7pfg8\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.606902 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.606932 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.606977 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.607003 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.607088 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.607133 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.612829 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.612977 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.613104 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.613615 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-0\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.614151 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.619142 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.624371 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pfg8\" (UniqueName: \"kubernetes.io/projected/36c00f4a-e4e0-472b-a51c-510d44296cf8-kube-api-access-7pfg8\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:10 crc kubenswrapper[4907]: I0127 18:58:10.687526 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 18:58:11 crc kubenswrapper[4907]: I0127 18:58:11.335192 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx"] Jan 27 18:58:11 crc kubenswrapper[4907]: I0127 18:58:11.340543 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:58:12 crc kubenswrapper[4907]: I0127 18:58:12.311184 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" event={"ID":"36c00f4a-e4e0-472b-a51c-510d44296cf8","Type":"ContainerStarted","Data":"de934e2dc327a81279af07eb6d91c89ddf59b78184be61a32bd24388b1e4eb23"} Jan 27 18:58:12 crc kubenswrapper[4907]: I0127 18:58:12.312307 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" event={"ID":"36c00f4a-e4e0-472b-a51c-510d44296cf8","Type":"ContainerStarted","Data":"31c4ad01a3355522b9c211e1d04e1216baf4a2feec06bd5c5cb46ad3bd3e8390"} Jan 27 18:58:12 crc kubenswrapper[4907]: I0127 18:58:12.348366 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" podStartSLOduration=1.827527863 podStartE2EDuration="2.348344368s" podCreationTimestamp="2026-01-27 18:58:10 +0000 UTC" firstStartedPulling="2026-01-27 18:58:11.340339926 +0000 UTC m=+3146.469622538" lastFinishedPulling="2026-01-27 18:58:11.861156441 +0000 UTC m=+3146.990439043" observedRunningTime="2026-01-27 18:58:12.3296353 +0000 UTC m=+3147.458917922" watchObservedRunningTime="2026-01-27 18:58:12.348344368 +0000 UTC m=+3147.477626990" Jan 27 18:58:22 crc kubenswrapper[4907]: I0127 18:58:22.748251 4907 scope.go:117] "RemoveContainer" 
containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:58:22 crc kubenswrapper[4907]: E0127 18:58:22.749204 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:58:33 crc kubenswrapper[4907]: I0127 18:58:33.747877 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:58:33 crc kubenswrapper[4907]: E0127 18:58:33.748898 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:58:44 crc kubenswrapper[4907]: I0127 18:58:44.748461 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:58:44 crc kubenswrapper[4907]: E0127 18:58:44.749475 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:58:58 crc kubenswrapper[4907]: I0127 18:58:58.749262 4907 scope.go:117] 
"RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:58:58 crc kubenswrapper[4907]: E0127 18:58:58.750199 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:59:10 crc kubenswrapper[4907]: I0127 18:59:10.748188 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:59:10 crc kubenswrapper[4907]: E0127 18:59:10.749341 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:59:12 crc kubenswrapper[4907]: I0127 18:59:12.917139 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mhl9t"] Jan 27 18:59:12 crc kubenswrapper[4907]: I0127 18:59:12.920794 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:12 crc kubenswrapper[4907]: I0127 18:59:12.954146 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhl9t"] Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.018649 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-utilities\") pod \"certified-operators-mhl9t\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.018725 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-catalog-content\") pod \"certified-operators-mhl9t\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.018790 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf95p\" (UniqueName: \"kubernetes.io/projected/df2b1f50-883c-4fe6-8134-b228bae26b02-kube-api-access-gf95p\") pod \"certified-operators-mhl9t\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.121106 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-utilities\") pod \"certified-operators-mhl9t\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.121198 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-catalog-content\") pod \"certified-operators-mhl9t\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.121247 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf95p\" (UniqueName: \"kubernetes.io/projected/df2b1f50-883c-4fe6-8134-b228bae26b02-kube-api-access-gf95p\") pod \"certified-operators-mhl9t\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.121890 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-utilities\") pod \"certified-operators-mhl9t\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.122046 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-catalog-content\") pod \"certified-operators-mhl9t\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.151685 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf95p\" (UniqueName: \"kubernetes.io/projected/df2b1f50-883c-4fe6-8134-b228bae26b02-kube-api-access-gf95p\") pod \"certified-operators-mhl9t\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.251468 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.796240 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhl9t"] Jan 27 18:59:13 crc kubenswrapper[4907]: W0127 18:59:13.809702 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf2b1f50_883c_4fe6_8134_b228bae26b02.slice/crio-ee9542baa1193d646b759be98ff4b2d6b65ff5b9881fddc76195273d466fd5bd WatchSource:0}: Error finding container ee9542baa1193d646b759be98ff4b2d6b65ff5b9881fddc76195273d466fd5bd: Status 404 returned error can't find the container with id ee9542baa1193d646b759be98ff4b2d6b65ff5b9881fddc76195273d466fd5bd Jan 27 18:59:13 crc kubenswrapper[4907]: I0127 18:59:13.986202 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhl9t" event={"ID":"df2b1f50-883c-4fe6-8134-b228bae26b02","Type":"ContainerStarted","Data":"ee9542baa1193d646b759be98ff4b2d6b65ff5b9881fddc76195273d466fd5bd"} Jan 27 18:59:15 crc kubenswrapper[4907]: I0127 18:59:15.000754 4907 generic.go:334] "Generic (PLEG): container finished" podID="df2b1f50-883c-4fe6-8134-b228bae26b02" containerID="f4ee7ed9a4cd2d601d326653a61a2e5e9f5297cbd97022c956f2de5cab276478" exitCode=0 Jan 27 18:59:15 crc kubenswrapper[4907]: I0127 18:59:15.000807 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhl9t" event={"ID":"df2b1f50-883c-4fe6-8134-b228bae26b02","Type":"ContainerDied","Data":"f4ee7ed9a4cd2d601d326653a61a2e5e9f5297cbd97022c956f2de5cab276478"} Jan 27 18:59:16 crc kubenswrapper[4907]: I0127 18:59:16.013761 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhl9t" 
event={"ID":"df2b1f50-883c-4fe6-8134-b228bae26b02","Type":"ContainerStarted","Data":"dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d"} Jan 27 18:59:18 crc kubenswrapper[4907]: I0127 18:59:18.041947 4907 generic.go:334] "Generic (PLEG): container finished" podID="df2b1f50-883c-4fe6-8134-b228bae26b02" containerID="dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d" exitCode=0 Jan 27 18:59:18 crc kubenswrapper[4907]: I0127 18:59:18.042545 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhl9t" event={"ID":"df2b1f50-883c-4fe6-8134-b228bae26b02","Type":"ContainerDied","Data":"dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d"} Jan 27 18:59:19 crc kubenswrapper[4907]: I0127 18:59:19.057814 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhl9t" event={"ID":"df2b1f50-883c-4fe6-8134-b228bae26b02","Type":"ContainerStarted","Data":"ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21"} Jan 27 18:59:19 crc kubenswrapper[4907]: I0127 18:59:19.098572 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mhl9t" podStartSLOduration=3.624154463 podStartE2EDuration="7.098530955s" podCreationTimestamp="2026-01-27 18:59:12 +0000 UTC" firstStartedPulling="2026-01-27 18:59:15.003989425 +0000 UTC m=+3210.133272037" lastFinishedPulling="2026-01-27 18:59:18.478365917 +0000 UTC m=+3213.607648529" observedRunningTime="2026-01-27 18:59:19.082478912 +0000 UTC m=+3214.211761524" watchObservedRunningTime="2026-01-27 18:59:19.098530955 +0000 UTC m=+3214.227813557" Jan 27 18:59:23 crc kubenswrapper[4907]: I0127 18:59:23.252691 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:23 crc kubenswrapper[4907]: I0127 18:59:23.253472 4907 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:23 crc kubenswrapper[4907]: I0127 18:59:23.320067 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:23 crc kubenswrapper[4907]: I0127 18:59:23.748683 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:59:23 crc kubenswrapper[4907]: E0127 18:59:23.749055 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:59:24 crc kubenswrapper[4907]: I0127 18:59:24.161972 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:24 crc kubenswrapper[4907]: I0127 18:59:24.233608 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhl9t"] Jan 27 18:59:26 crc kubenswrapper[4907]: I0127 18:59:26.130546 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mhl9t" podUID="df2b1f50-883c-4fe6-8134-b228bae26b02" containerName="registry-server" containerID="cri-o://ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21" gracePeriod=2 Jan 27 18:59:26 crc kubenswrapper[4907]: I0127 18:59:26.661927 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:26 crc kubenswrapper[4907]: I0127 18:59:26.670787 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-catalog-content\") pod \"df2b1f50-883c-4fe6-8134-b228bae26b02\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " Jan 27 18:59:26 crc kubenswrapper[4907]: I0127 18:59:26.670873 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-utilities\") pod \"df2b1f50-883c-4fe6-8134-b228bae26b02\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " Jan 27 18:59:26 crc kubenswrapper[4907]: I0127 18:59:26.671217 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf95p\" (UniqueName: \"kubernetes.io/projected/df2b1f50-883c-4fe6-8134-b228bae26b02-kube-api-access-gf95p\") pod \"df2b1f50-883c-4fe6-8134-b228bae26b02\" (UID: \"df2b1f50-883c-4fe6-8134-b228bae26b02\") " Jan 27 18:59:26 crc kubenswrapper[4907]: I0127 18:59:26.671729 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-utilities" (OuterVolumeSpecName: "utilities") pod "df2b1f50-883c-4fe6-8134-b228bae26b02" (UID: "df2b1f50-883c-4fe6-8134-b228bae26b02"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:59:26 crc kubenswrapper[4907]: I0127 18:59:26.671950 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:26 crc kubenswrapper[4907]: I0127 18:59:26.677114 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df2b1f50-883c-4fe6-8134-b228bae26b02-kube-api-access-gf95p" (OuterVolumeSpecName: "kube-api-access-gf95p") pod "df2b1f50-883c-4fe6-8134-b228bae26b02" (UID: "df2b1f50-883c-4fe6-8134-b228bae26b02"). InnerVolumeSpecName "kube-api-access-gf95p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:59:26 crc kubenswrapper[4907]: I0127 18:59:26.774463 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf95p\" (UniqueName: \"kubernetes.io/projected/df2b1f50-883c-4fe6-8134-b228bae26b02-kube-api-access-gf95p\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.143066 4907 generic.go:334] "Generic (PLEG): container finished" podID="df2b1f50-883c-4fe6-8134-b228bae26b02" containerID="ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21" exitCode=0 Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.143122 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhl9t" event={"ID":"df2b1f50-883c-4fe6-8134-b228bae26b02","Type":"ContainerDied","Data":"ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21"} Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.143151 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhl9t" event={"ID":"df2b1f50-883c-4fe6-8134-b228bae26b02","Type":"ContainerDied","Data":"ee9542baa1193d646b759be98ff4b2d6b65ff5b9881fddc76195273d466fd5bd"} Jan 27 18:59:27 crc kubenswrapper[4907]: 
I0127 18:59:27.143168 4907 scope.go:117] "RemoveContainer" containerID="ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.143178 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhl9t" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.168112 4907 scope.go:117] "RemoveContainer" containerID="dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.190545 4907 scope.go:117] "RemoveContainer" containerID="f4ee7ed9a4cd2d601d326653a61a2e5e9f5297cbd97022c956f2de5cab276478" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.253517 4907 scope.go:117] "RemoveContainer" containerID="ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21" Jan 27 18:59:27 crc kubenswrapper[4907]: E0127 18:59:27.254095 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21\": container with ID starting with ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21 not found: ID does not exist" containerID="ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.254130 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21"} err="failed to get container status \"ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21\": rpc error: code = NotFound desc = could not find container \"ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21\": container with ID starting with ce05b069816dae6ac1025c9609a4c8bc3ca06c0aa1756291e180b7facde0eb21 not found: ID does not exist" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.254150 4907 
scope.go:117] "RemoveContainer" containerID="dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d" Jan 27 18:59:27 crc kubenswrapper[4907]: E0127 18:59:27.254757 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d\": container with ID starting with dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d not found: ID does not exist" containerID="dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.254780 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d"} err="failed to get container status \"dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d\": rpc error: code = NotFound desc = could not find container \"dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d\": container with ID starting with dc1a5917421578b5f7ef33d275e047981ddf386f5429c8e16ab524676405025d not found: ID does not exist" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.254794 4907 scope.go:117] "RemoveContainer" containerID="f4ee7ed9a4cd2d601d326653a61a2e5e9f5297cbd97022c956f2de5cab276478" Jan 27 18:59:27 crc kubenswrapper[4907]: E0127 18:59:27.255166 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4ee7ed9a4cd2d601d326653a61a2e5e9f5297cbd97022c956f2de5cab276478\": container with ID starting with f4ee7ed9a4cd2d601d326653a61a2e5e9f5297cbd97022c956f2de5cab276478 not found: ID does not exist" containerID="f4ee7ed9a4cd2d601d326653a61a2e5e9f5297cbd97022c956f2de5cab276478" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.255220 4907 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f4ee7ed9a4cd2d601d326653a61a2e5e9f5297cbd97022c956f2de5cab276478"} err="failed to get container status \"f4ee7ed9a4cd2d601d326653a61a2e5e9f5297cbd97022c956f2de5cab276478\": rpc error: code = NotFound desc = could not find container \"f4ee7ed9a4cd2d601d326653a61a2e5e9f5297cbd97022c956f2de5cab276478\": container with ID starting with f4ee7ed9a4cd2d601d326653a61a2e5e9f5297cbd97022c956f2de5cab276478 not found: ID does not exist" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.287754 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df2b1f50-883c-4fe6-8134-b228bae26b02" (UID: "df2b1f50-883c-4fe6-8134-b228bae26b02"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.290918 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df2b1f50-883c-4fe6-8134-b228bae26b02-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.485684 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhl9t"] Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.500736 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mhl9t"] Jan 27 18:59:27 crc kubenswrapper[4907]: I0127 18:59:27.762778 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df2b1f50-883c-4fe6-8134-b228bae26b02" path="/var/lib/kubelet/pods/df2b1f50-883c-4fe6-8134-b228bae26b02/volumes" Jan 27 18:59:35 crc kubenswrapper[4907]: I0127 18:59:35.756668 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:59:35 crc kubenswrapper[4907]: E0127 
18:59:35.757519 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:59:46 crc kubenswrapper[4907]: I0127 18:59:46.749300 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:59:46 crc kubenswrapper[4907]: E0127 18:59:46.750111 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 18:59:59 crc kubenswrapper[4907]: I0127 18:59:59.747947 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 18:59:59 crc kubenswrapper[4907]: E0127 18:59:59.749874 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.164041 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6"] Jan 27 19:00:00 
crc kubenswrapper[4907]: E0127 19:00:00.164911 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df2b1f50-883c-4fe6-8134-b228bae26b02" containerName="extract-content" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.164929 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="df2b1f50-883c-4fe6-8134-b228bae26b02" containerName="extract-content" Jan 27 19:00:00 crc kubenswrapper[4907]: E0127 19:00:00.164940 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df2b1f50-883c-4fe6-8134-b228bae26b02" containerName="extract-utilities" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.164948 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="df2b1f50-883c-4fe6-8134-b228bae26b02" containerName="extract-utilities" Jan 27 19:00:00 crc kubenswrapper[4907]: E0127 19:00:00.164973 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df2b1f50-883c-4fe6-8134-b228bae26b02" containerName="registry-server" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.164979 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="df2b1f50-883c-4fe6-8134-b228bae26b02" containerName="registry-server" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.165244 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="df2b1f50-883c-4fe6-8134-b228bae26b02" containerName="registry-server" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.166103 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.168322 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.168934 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.179678 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6"] Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.247685 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b8f8a81-de05-4458-b8bc-4031caa5a02c-config-volume\") pod \"collect-profiles-29492340-9z9c6\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.248088 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b8f8a81-de05-4458-b8bc-4031caa5a02c-secret-volume\") pod \"collect-profiles-29492340-9z9c6\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.248149 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8l5f\" (UniqueName: \"kubernetes.io/projected/2b8f8a81-de05-4458-b8bc-4031caa5a02c-kube-api-access-c8l5f\") pod \"collect-profiles-29492340-9z9c6\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.350083 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b8f8a81-de05-4458-b8bc-4031caa5a02c-config-volume\") pod \"collect-profiles-29492340-9z9c6\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.350329 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b8f8a81-de05-4458-b8bc-4031caa5a02c-secret-volume\") pod \"collect-profiles-29492340-9z9c6\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.350360 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8l5f\" (UniqueName: \"kubernetes.io/projected/2b8f8a81-de05-4458-b8bc-4031caa5a02c-kube-api-access-c8l5f\") pod \"collect-profiles-29492340-9z9c6\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.351044 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b8f8a81-de05-4458-b8bc-4031caa5a02c-config-volume\") pod \"collect-profiles-29492340-9z9c6\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.356680 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2b8f8a81-de05-4458-b8bc-4031caa5a02c-secret-volume\") pod \"collect-profiles-29492340-9z9c6\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.369247 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8l5f\" (UniqueName: \"kubernetes.io/projected/2b8f8a81-de05-4458-b8bc-4031caa5a02c-kube-api-access-c8l5f\") pod \"collect-profiles-29492340-9z9c6\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:00 crc kubenswrapper[4907]: I0127 19:00:00.495929 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:01 crc kubenswrapper[4907]: I0127 19:00:01.003503 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6"] Jan 27 19:00:01 crc kubenswrapper[4907]: I0127 19:00:01.588531 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" event={"ID":"2b8f8a81-de05-4458-b8bc-4031caa5a02c","Type":"ContainerStarted","Data":"730476889ff7c89dc83c11f4812e47c9e0e69e6dd2218580d51c13a19fd1dd08"} Jan 27 19:00:01 crc kubenswrapper[4907]: I0127 19:00:01.589004 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" event={"ID":"2b8f8a81-de05-4458-b8bc-4031caa5a02c","Type":"ContainerStarted","Data":"969993e303f3b9bbbb118f37f88ab62a2d09090b037b2ac6d99ae74f9847c6d8"} Jan 27 19:00:02 crc kubenswrapper[4907]: I0127 19:00:02.612719 4907 generic.go:334] "Generic (PLEG): container finished" podID="2b8f8a81-de05-4458-b8bc-4031caa5a02c" 
containerID="730476889ff7c89dc83c11f4812e47c9e0e69e6dd2218580d51c13a19fd1dd08" exitCode=0 Jan 27 19:00:02 crc kubenswrapper[4907]: I0127 19:00:02.612765 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" event={"ID":"2b8f8a81-de05-4458-b8bc-4031caa5a02c","Type":"ContainerDied","Data":"730476889ff7c89dc83c11f4812e47c9e0e69e6dd2218580d51c13a19fd1dd08"} Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.049141 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.139814 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b8f8a81-de05-4458-b8bc-4031caa5a02c-secret-volume\") pod \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.140184 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8l5f\" (UniqueName: \"kubernetes.io/projected/2b8f8a81-de05-4458-b8bc-4031caa5a02c-kube-api-access-c8l5f\") pod \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.140240 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b8f8a81-de05-4458-b8bc-4031caa5a02c-config-volume\") pod \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\" (UID: \"2b8f8a81-de05-4458-b8bc-4031caa5a02c\") " Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.140866 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b8f8a81-de05-4458-b8bc-4031caa5a02c-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"2b8f8a81-de05-4458-b8bc-4031caa5a02c" (UID: "2b8f8a81-de05-4458-b8bc-4031caa5a02c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.141246 4907 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b8f8a81-de05-4458-b8bc-4031caa5a02c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.145845 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b8f8a81-de05-4458-b8bc-4031caa5a02c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2b8f8a81-de05-4458-b8bc-4031caa5a02c" (UID: "2b8f8a81-de05-4458-b8bc-4031caa5a02c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.145862 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b8f8a81-de05-4458-b8bc-4031caa5a02c-kube-api-access-c8l5f" (OuterVolumeSpecName: "kube-api-access-c8l5f") pod "2b8f8a81-de05-4458-b8bc-4031caa5a02c" (UID: "2b8f8a81-de05-4458-b8bc-4031caa5a02c"). InnerVolumeSpecName "kube-api-access-c8l5f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.243364 4907 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b8f8a81-de05-4458-b8bc-4031caa5a02c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.243406 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8l5f\" (UniqueName: \"kubernetes.io/projected/2b8f8a81-de05-4458-b8bc-4031caa5a02c-kube-api-access-c8l5f\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.625773 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" event={"ID":"2b8f8a81-de05-4458-b8bc-4031caa5a02c","Type":"ContainerDied","Data":"969993e303f3b9bbbb118f37f88ab62a2d09090b037b2ac6d99ae74f9847c6d8"} Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.626097 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="969993e303f3b9bbbb118f37f88ab62a2d09090b037b2ac6d99ae74f9847c6d8" Jan 27 19:00:03 crc kubenswrapper[4907]: I0127 19:00:03.625841 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6" Jan 27 19:00:04 crc kubenswrapper[4907]: I0127 19:00:04.123922 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"] Jan 27 19:00:04 crc kubenswrapper[4907]: I0127 19:00:04.134522 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492295-hbgsp"] Jan 27 19:00:05 crc kubenswrapper[4907]: I0127 19:00:05.779107 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98eb00a2-9da3-459d-b011-7d92bcd6ed21" path="/var/lib/kubelet/pods/98eb00a2-9da3-459d-b011-7d92bcd6ed21/volumes" Jan 27 19:00:11 crc kubenswrapper[4907]: I0127 19:00:11.748884 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 19:00:11 crc kubenswrapper[4907]: E0127 19:00:11.750090 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:00:20 crc kubenswrapper[4907]: I0127 19:00:20.811203 4907 generic.go:334] "Generic (PLEG): container finished" podID="36c00f4a-e4e0-472b-a51c-510d44296cf8" containerID="de934e2dc327a81279af07eb6d91c89ddf59b78184be61a32bd24388b1e4eb23" exitCode=0 Jan 27 19:00:20 crc kubenswrapper[4907]: I0127 19:00:20.811255 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" 
event={"ID":"36c00f4a-e4e0-472b-a51c-510d44296cf8","Type":"ContainerDied","Data":"de934e2dc327a81279af07eb6d91c89ddf59b78184be61a32bd24388b1e4eb23"} Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.382358 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.561680 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pfg8\" (UniqueName: \"kubernetes.io/projected/36c00f4a-e4e0-472b-a51c-510d44296cf8-kube-api-access-7pfg8\") pod \"36c00f4a-e4e0-472b-a51c-510d44296cf8\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.561794 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-0\") pod \"36c00f4a-e4e0-472b-a51c-510d44296cf8\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.561827 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ssh-key-openstack-edpm-ipam\") pod \"36c00f4a-e4e0-472b-a51c-510d44296cf8\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.561936 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-telemetry-power-monitoring-combined-ca-bundle\") pod \"36c00f4a-e4e0-472b-a51c-510d44296cf8\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.561969 4907 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-2\") pod \"36c00f4a-e4e0-472b-a51c-510d44296cf8\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.562023 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-1\") pod \"36c00f4a-e4e0-472b-a51c-510d44296cf8\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.562069 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-inventory\") pod \"36c00f4a-e4e0-472b-a51c-510d44296cf8\" (UID: \"36c00f4a-e4e0-472b-a51c-510d44296cf8\") " Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.567882 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "36c00f4a-e4e0-472b-a51c-510d44296cf8" (UID: "36c00f4a-e4e0-472b-a51c-510d44296cf8"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.568102 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36c00f4a-e4e0-472b-a51c-510d44296cf8-kube-api-access-7pfg8" (OuterVolumeSpecName: "kube-api-access-7pfg8") pod "36c00f4a-e4e0-472b-a51c-510d44296cf8" (UID: "36c00f4a-e4e0-472b-a51c-510d44296cf8"). InnerVolumeSpecName "kube-api-access-7pfg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.595136 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-inventory" (OuterVolumeSpecName: "inventory") pod "36c00f4a-e4e0-472b-a51c-510d44296cf8" (UID: "36c00f4a-e4e0-472b-a51c-510d44296cf8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.595413 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "36c00f4a-e4e0-472b-a51c-510d44296cf8" (UID: "36c00f4a-e4e0-472b-a51c-510d44296cf8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.596314 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "36c00f4a-e4e0-472b-a51c-510d44296cf8" (UID: "36c00f4a-e4e0-472b-a51c-510d44296cf8"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.600251 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "36c00f4a-e4e0-472b-a51c-510d44296cf8" (UID: "36c00f4a-e4e0-472b-a51c-510d44296cf8"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.601865 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "36c00f4a-e4e0-472b-a51c-510d44296cf8" (UID: "36c00f4a-e4e0-472b-a51c-510d44296cf8"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.666335 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pfg8\" (UniqueName: \"kubernetes.io/projected/36c00f4a-e4e0-472b-a51c-510d44296cf8-kube-api-access-7pfg8\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.666380 4907 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.666394 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.666408 4907 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.666427 4907 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-2\") on node 
\"crc\" DevicePath \"\"" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.666440 4907 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.666453 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36c00f4a-e4e0-472b-a51c-510d44296cf8-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.834402 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" event={"ID":"36c00f4a-e4e0-472b-a51c-510d44296cf8","Type":"ContainerDied","Data":"31c4ad01a3355522b9c211e1d04e1216baf4a2feec06bd5c5cb46ad3bd3e8390"} Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.834451 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31c4ad01a3355522b9c211e1d04e1216baf4a2feec06bd5c5cb46ad3bd3e8390" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.834456 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.940288 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc"] Jan 27 19:00:22 crc kubenswrapper[4907]: E0127 19:00:22.940863 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36c00f4a-e4e0-472b-a51c-510d44296cf8" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.940885 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="36c00f4a-e4e0-472b-a51c-510d44296cf8" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 27 19:00:22 crc kubenswrapper[4907]: E0127 19:00:22.940912 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b8f8a81-de05-4458-b8bc-4031caa5a02c" containerName="collect-profiles" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.940919 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b8f8a81-de05-4458-b8bc-4031caa5a02c" containerName="collect-profiles" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.941158 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="36c00f4a-e4e0-472b-a51c-510d44296cf8" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.941180 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b8f8a81-de05-4458-b8bc-4031caa5a02c" containerName="collect-profiles" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.942037 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.945114 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.945361 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.945480 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.945618 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9gxdz" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.945815 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Jan 27 19:00:22 crc kubenswrapper[4907]: I0127 19:00:22.954358 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc"] Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.077408 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.077503 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5lpt\" (UniqueName: \"kubernetes.io/projected/cd8ce37e-984e-48a7-afcf-98798042a1c4-kube-api-access-b5lpt\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") 
" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.077696 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.078105 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.078167 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.181114 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 
19:00:23.181473 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.181615 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.181710 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5lpt\" (UniqueName: \"kubernetes.io/projected/cd8ce37e-984e-48a7-afcf-98798042a1c4-kube-api-access-b5lpt\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.181851 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.186833 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.187375 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.187374 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.188244 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.202022 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5lpt\" (UniqueName: \"kubernetes.io/projected/cd8ce37e-984e-48a7-afcf-98798042a1c4-kube-api-access-b5lpt\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k5dbc\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.277751 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:23 crc kubenswrapper[4907]: I0127 19:00:23.848877 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc"] Jan 27 19:00:24 crc kubenswrapper[4907]: I0127 19:00:24.864065 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" event={"ID":"cd8ce37e-984e-48a7-afcf-98798042a1c4","Type":"ContainerStarted","Data":"3c7db8bf98847300f59678b48db417de088b5f4aeca9a7730e7ab696c96af750"} Jan 27 19:00:24 crc kubenswrapper[4907]: I0127 19:00:24.864788 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" event={"ID":"cd8ce37e-984e-48a7-afcf-98798042a1c4","Type":"ContainerStarted","Data":"ebb7e691bdeb9c59faa365527a1e632eeb69022610c7d31919ed0fbb51ac8ea8"} Jan 27 19:00:24 crc kubenswrapper[4907]: I0127 19:00:24.891742 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" podStartSLOduration=2.459446189 podStartE2EDuration="2.891716926s" podCreationTimestamp="2026-01-27 19:00:22 +0000 UTC" firstStartedPulling="2026-01-27 19:00:23.850586539 +0000 UTC m=+3278.979869151" lastFinishedPulling="2026-01-27 19:00:24.282857276 +0000 UTC m=+3279.412139888" observedRunningTime="2026-01-27 19:00:24.879542762 +0000 UTC m=+3280.008825364" watchObservedRunningTime="2026-01-27 19:00:24.891716926 +0000 UTC m=+3280.020999538" Jan 27 19:00:26 crc kubenswrapper[4907]: I0127 19:00:26.748277 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 19:00:27 crc kubenswrapper[4907]: 
I0127 19:00:27.901315 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"2807b33513fcc6ed6b52c3e953a67d080db4ce66b2d00fa0d2f298131fad7d94"} Jan 27 19:00:42 crc kubenswrapper[4907]: I0127 19:00:42.061637 4907 generic.go:334] "Generic (PLEG): container finished" podID="cd8ce37e-984e-48a7-afcf-98798042a1c4" containerID="3c7db8bf98847300f59678b48db417de088b5f4aeca9a7730e7ab696c96af750" exitCode=0 Jan 27 19:00:42 crc kubenswrapper[4907]: I0127 19:00:42.061719 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" event={"ID":"cd8ce37e-984e-48a7-afcf-98798042a1c4","Type":"ContainerDied","Data":"3c7db8bf98847300f59678b48db417de088b5f4aeca9a7730e7ab696c96af750"} Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.637834 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.801405 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-ssh-key-openstack-edpm-ipam\") pod \"cd8ce37e-984e-48a7-afcf-98798042a1c4\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.801598 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-1\") pod \"cd8ce37e-984e-48a7-afcf-98798042a1c4\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.801840 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-0\") pod \"cd8ce37e-984e-48a7-afcf-98798042a1c4\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.801893 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-inventory\") pod \"cd8ce37e-984e-48a7-afcf-98798042a1c4\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.801963 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5lpt\" (UniqueName: \"kubernetes.io/projected/cd8ce37e-984e-48a7-afcf-98798042a1c4-kube-api-access-b5lpt\") pod \"cd8ce37e-984e-48a7-afcf-98798042a1c4\" (UID: \"cd8ce37e-984e-48a7-afcf-98798042a1c4\") " Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 
19:00:43.807979 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd8ce37e-984e-48a7-afcf-98798042a1c4-kube-api-access-b5lpt" (OuterVolumeSpecName: "kube-api-access-b5lpt") pod "cd8ce37e-984e-48a7-afcf-98798042a1c4" (UID: "cd8ce37e-984e-48a7-afcf-98798042a1c4"). InnerVolumeSpecName "kube-api-access-b5lpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.838108 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cd8ce37e-984e-48a7-afcf-98798042a1c4" (UID: "cd8ce37e-984e-48a7-afcf-98798042a1c4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.838634 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "cd8ce37e-984e-48a7-afcf-98798042a1c4" (UID: "cd8ce37e-984e-48a7-afcf-98798042a1c4"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.839210 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "cd8ce37e-984e-48a7-afcf-98798042a1c4" (UID: "cd8ce37e-984e-48a7-afcf-98798042a1c4"). InnerVolumeSpecName "logging-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.840514 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-inventory" (OuterVolumeSpecName: "inventory") pod "cd8ce37e-984e-48a7-afcf-98798042a1c4" (UID: "cd8ce37e-984e-48a7-afcf-98798042a1c4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.905030 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.905073 4907 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.905088 4907 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.905104 4907 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd8ce37e-984e-48a7-afcf-98798042a1c4-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:43 crc kubenswrapper[4907]: I0127 19:00:43.905118 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5lpt\" (UniqueName: \"kubernetes.io/projected/cd8ce37e-984e-48a7-afcf-98798042a1c4-kube-api-access-b5lpt\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:44 crc kubenswrapper[4907]: I0127 19:00:44.085201 4907 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" event={"ID":"cd8ce37e-984e-48a7-afcf-98798042a1c4","Type":"ContainerDied","Data":"ebb7e691bdeb9c59faa365527a1e632eeb69022610c7d31919ed0fbb51ac8ea8"} Jan 27 19:00:44 crc kubenswrapper[4907]: I0127 19:00:44.085250 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebb7e691bdeb9c59faa365527a1e632eeb69022610c7d31919ed0fbb51ac8ea8" Jan 27 19:00:44 crc kubenswrapper[4907]: I0127 19:00:44.085549 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k5dbc" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.160573 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29492341-l97qr"] Jan 27 19:01:00 crc kubenswrapper[4907]: E0127 19:01:00.161758 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8ce37e-984e-48a7-afcf-98798042a1c4" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.161778 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8ce37e-984e-48a7-afcf-98798042a1c4" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.162103 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd8ce37e-984e-48a7-afcf-98798042a1c4" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.163136 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.173758 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492341-l97qr"] Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.325387 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb9bv\" (UniqueName: \"kubernetes.io/projected/6e412045-8e45-4718-98e5-17e76c69623a-kube-api-access-mb9bv\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.325444 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-config-data\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.325476 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-combined-ca-bundle\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.325687 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-fernet-keys\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.428214 4907 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-mb9bv\" (UniqueName: \"kubernetes.io/projected/6e412045-8e45-4718-98e5-17e76c69623a-kube-api-access-mb9bv\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.428260 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-config-data\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.428288 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-combined-ca-bundle\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.428338 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-fernet-keys\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.434462 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-fernet-keys\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.434461 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-combined-ca-bundle\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.440547 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-config-data\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.450726 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb9bv\" (UniqueName: \"kubernetes.io/projected/6e412045-8e45-4718-98e5-17e76c69623a-kube-api-access-mb9bv\") pod \"keystone-cron-29492341-l97qr\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.505914 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.666173 4907 scope.go:117] "RemoveContainer" containerID="18df1497634165c04863e96f7f6daec0a2367654ea826c8f22afef5c3b441191" Jan 27 19:01:00 crc kubenswrapper[4907]: I0127 19:01:00.989088 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492341-l97qr"] Jan 27 19:01:01 crc kubenswrapper[4907]: I0127 19:01:01.275519 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492341-l97qr" event={"ID":"6e412045-8e45-4718-98e5-17e76c69623a","Type":"ContainerStarted","Data":"bbdd02deab8c0a2ccad14373d3258f0611a0263009c3981dc68ab4c697b01553"} Jan 27 19:01:02 crc kubenswrapper[4907]: I0127 19:01:02.286686 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492341-l97qr" event={"ID":"6e412045-8e45-4718-98e5-17e76c69623a","Type":"ContainerStarted","Data":"43901a3b4379bb236fe8c34fd5923c098a8bbb69bb883b3b1a55a5d27e825064"} Jan 27 19:01:02 crc kubenswrapper[4907]: I0127 19:01:02.305310 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29492341-l97qr" podStartSLOduration=2.305295909 podStartE2EDuration="2.305295909s" podCreationTimestamp="2026-01-27 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:02.299179038 +0000 UTC m=+3317.428461650" watchObservedRunningTime="2026-01-27 19:01:02.305295909 +0000 UTC m=+3317.434578521" Jan 27 19:01:06 crc kubenswrapper[4907]: I0127 19:01:06.330934 4907 generic.go:334] "Generic (PLEG): container finished" podID="6e412045-8e45-4718-98e5-17e76c69623a" containerID="43901a3b4379bb236fe8c34fd5923c098a8bbb69bb883b3b1a55a5d27e825064" exitCode=0 Jan 27 19:01:06 crc kubenswrapper[4907]: I0127 19:01:06.331052 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-cron-29492341-l97qr" event={"ID":"6e412045-8e45-4718-98e5-17e76c69623a","Type":"ContainerDied","Data":"43901a3b4379bb236fe8c34fd5923c098a8bbb69bb883b3b1a55a5d27e825064"} Jan 27 19:01:07 crc kubenswrapper[4907]: I0127 19:01:07.902172 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.027541 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb9bv\" (UniqueName: \"kubernetes.io/projected/6e412045-8e45-4718-98e5-17e76c69623a-kube-api-access-mb9bv\") pod \"6e412045-8e45-4718-98e5-17e76c69623a\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.027836 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-fernet-keys\") pod \"6e412045-8e45-4718-98e5-17e76c69623a\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.027917 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-config-data\") pod \"6e412045-8e45-4718-98e5-17e76c69623a\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.027947 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-combined-ca-bundle\") pod \"6e412045-8e45-4718-98e5-17e76c69623a\" (UID: \"6e412045-8e45-4718-98e5-17e76c69623a\") " Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.033824 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6e412045-8e45-4718-98e5-17e76c69623a-kube-api-access-mb9bv" (OuterVolumeSpecName: "kube-api-access-mb9bv") pod "6e412045-8e45-4718-98e5-17e76c69623a" (UID: "6e412045-8e45-4718-98e5-17e76c69623a"). InnerVolumeSpecName "kube-api-access-mb9bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.036744 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6e412045-8e45-4718-98e5-17e76c69623a" (UID: "6e412045-8e45-4718-98e5-17e76c69623a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.070077 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e412045-8e45-4718-98e5-17e76c69623a" (UID: "6e412045-8e45-4718-98e5-17e76c69623a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.100684 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-config-data" (OuterVolumeSpecName: "config-data") pod "6e412045-8e45-4718-98e5-17e76c69623a" (UID: "6e412045-8e45-4718-98e5-17e76c69623a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.131100 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb9bv\" (UniqueName: \"kubernetes.io/projected/6e412045-8e45-4718-98e5-17e76c69623a-kube-api-access-mb9bv\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.131153 4907 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.131166 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.131180 4907 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e412045-8e45-4718-98e5-17e76c69623a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.354432 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492341-l97qr" event={"ID":"6e412045-8e45-4718-98e5-17e76c69623a","Type":"ContainerDied","Data":"bbdd02deab8c0a2ccad14373d3258f0611a0263009c3981dc68ab4c697b01553"} Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.354482 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbdd02deab8c0a2ccad14373d3258f0611a0263009c3981dc68ab4c697b01553" Jan 27 19:01:08 crc kubenswrapper[4907]: I0127 19:01:08.354485 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492341-l97qr" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.199197 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wxb6x"] Jan 27 19:01:25 crc kubenswrapper[4907]: E0127 19:01:25.201671 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e412045-8e45-4718-98e5-17e76c69623a" containerName="keystone-cron" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.201790 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e412045-8e45-4718-98e5-17e76c69623a" containerName="keystone-cron" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.202126 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e412045-8e45-4718-98e5-17e76c69623a" containerName="keystone-cron" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.204485 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.214000 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wxb6x"] Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.282329 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-catalog-content\") pod \"community-operators-wxb6x\" (UID: \"019ce7ce-e01c-4708-a3ea-4e4139158064\") " pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.282428 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-utilities\") pod \"community-operators-wxb6x\" (UID: \"019ce7ce-e01c-4708-a3ea-4e4139158064\") " 
pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.282605 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z652\" (UniqueName: \"kubernetes.io/projected/019ce7ce-e01c-4708-a3ea-4e4139158064-kube-api-access-5z652\") pod \"community-operators-wxb6x\" (UID: \"019ce7ce-e01c-4708-a3ea-4e4139158064\") " pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.384697 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-catalog-content\") pod \"community-operators-wxb6x\" (UID: \"019ce7ce-e01c-4708-a3ea-4e4139158064\") " pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.384780 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-utilities\") pod \"community-operators-wxb6x\" (UID: \"019ce7ce-e01c-4708-a3ea-4e4139158064\") " pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.384905 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z652\" (UniqueName: \"kubernetes.io/projected/019ce7ce-e01c-4708-a3ea-4e4139158064-kube-api-access-5z652\") pod \"community-operators-wxb6x\" (UID: \"019ce7ce-e01c-4708-a3ea-4e4139158064\") " pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.385542 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-catalog-content\") pod \"community-operators-wxb6x\" (UID: 
\"019ce7ce-e01c-4708-a3ea-4e4139158064\") " pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.385808 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-utilities\") pod \"community-operators-wxb6x\" (UID: \"019ce7ce-e01c-4708-a3ea-4e4139158064\") " pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.405610 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z652\" (UniqueName: \"kubernetes.io/projected/019ce7ce-e01c-4708-a3ea-4e4139158064-kube-api-access-5z652\") pod \"community-operators-wxb6x\" (UID: \"019ce7ce-e01c-4708-a3ea-4e4139158064\") " pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:25 crc kubenswrapper[4907]: I0127 19:01:25.529660 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:26 crc kubenswrapper[4907]: I0127 19:01:26.187526 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wxb6x"] Jan 27 19:01:26 crc kubenswrapper[4907]: I0127 19:01:26.603162 4907 generic.go:334] "Generic (PLEG): container finished" podID="019ce7ce-e01c-4708-a3ea-4e4139158064" containerID="ae277e72e1b29c9d6de099985c143ae0deaedb192cea85abcb4b68364d325863" exitCode=0 Jan 27 19:01:26 crc kubenswrapper[4907]: I0127 19:01:26.603228 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxb6x" event={"ID":"019ce7ce-e01c-4708-a3ea-4e4139158064","Type":"ContainerDied","Data":"ae277e72e1b29c9d6de099985c143ae0deaedb192cea85abcb4b68364d325863"} Jan 27 19:01:26 crc kubenswrapper[4907]: I0127 19:01:26.603452 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxb6x" 
event={"ID":"019ce7ce-e01c-4708-a3ea-4e4139158064","Type":"ContainerStarted","Data":"17e40b23eac574b1fad07b7158cb8bc7797422af7d2de32e35079705e0b6e787"} Jan 27 19:01:28 crc kubenswrapper[4907]: I0127 19:01:28.634411 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxb6x" event={"ID":"019ce7ce-e01c-4708-a3ea-4e4139158064","Type":"ContainerStarted","Data":"a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1"} Jan 27 19:01:30 crc kubenswrapper[4907]: I0127 19:01:30.659103 4907 generic.go:334] "Generic (PLEG): container finished" podID="019ce7ce-e01c-4708-a3ea-4e4139158064" containerID="a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1" exitCode=0 Jan 27 19:01:30 crc kubenswrapper[4907]: I0127 19:01:30.659219 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxb6x" event={"ID":"019ce7ce-e01c-4708-a3ea-4e4139158064","Type":"ContainerDied","Data":"a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1"} Jan 27 19:01:35 crc kubenswrapper[4907]: I0127 19:01:35.723138 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxb6x" event={"ID":"019ce7ce-e01c-4708-a3ea-4e4139158064","Type":"ContainerStarted","Data":"3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c"} Jan 27 19:01:35 crc kubenswrapper[4907]: I0127 19:01:35.746791 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wxb6x" podStartSLOduration=2.053541694 podStartE2EDuration="10.746766205s" podCreationTimestamp="2026-01-27 19:01:25 +0000 UTC" firstStartedPulling="2026-01-27 19:01:26.605197877 +0000 UTC m=+3341.734480489" lastFinishedPulling="2026-01-27 19:01:35.298422388 +0000 UTC m=+3350.427705000" observedRunningTime="2026-01-27 19:01:35.742291849 +0000 UTC m=+3350.871574481" watchObservedRunningTime="2026-01-27 19:01:35.746766205 +0000 UTC 
m=+3350.876048827" Jan 27 19:01:45 crc kubenswrapper[4907]: I0127 19:01:45.530407 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:45 crc kubenswrapper[4907]: I0127 19:01:45.531215 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:45 crc kubenswrapper[4907]: I0127 19:01:45.618658 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:45 crc kubenswrapper[4907]: I0127 19:01:45.906571 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:45 crc kubenswrapper[4907]: I0127 19:01:45.972261 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wxb6x"] Jan 27 19:01:47 crc kubenswrapper[4907]: I0127 19:01:47.866014 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wxb6x" podUID="019ce7ce-e01c-4708-a3ea-4e4139158064" containerName="registry-server" containerID="cri-o://3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c" gracePeriod=2 Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.490088 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.615674 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-catalog-content\") pod \"019ce7ce-e01c-4708-a3ea-4e4139158064\" (UID: \"019ce7ce-e01c-4708-a3ea-4e4139158064\") " Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.615911 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z652\" (UniqueName: \"kubernetes.io/projected/019ce7ce-e01c-4708-a3ea-4e4139158064-kube-api-access-5z652\") pod \"019ce7ce-e01c-4708-a3ea-4e4139158064\" (UID: \"019ce7ce-e01c-4708-a3ea-4e4139158064\") " Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.615938 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-utilities\") pod \"019ce7ce-e01c-4708-a3ea-4e4139158064\" (UID: \"019ce7ce-e01c-4708-a3ea-4e4139158064\") " Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.618011 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-utilities" (OuterVolumeSpecName: "utilities") pod "019ce7ce-e01c-4708-a3ea-4e4139158064" (UID: "019ce7ce-e01c-4708-a3ea-4e4139158064"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.623903 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/019ce7ce-e01c-4708-a3ea-4e4139158064-kube-api-access-5z652" (OuterVolumeSpecName: "kube-api-access-5z652") pod "019ce7ce-e01c-4708-a3ea-4e4139158064" (UID: "019ce7ce-e01c-4708-a3ea-4e4139158064"). InnerVolumeSpecName "kube-api-access-5z652". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.688344 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "019ce7ce-e01c-4708-a3ea-4e4139158064" (UID: "019ce7ce-e01c-4708-a3ea-4e4139158064"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.719288 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z652\" (UniqueName: \"kubernetes.io/projected/019ce7ce-e01c-4708-a3ea-4e4139158064-kube-api-access-5z652\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.719329 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.719342 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/019ce7ce-e01c-4708-a3ea-4e4139158064-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.885798 4907 generic.go:334] "Generic (PLEG): container finished" podID="019ce7ce-e01c-4708-a3ea-4e4139158064" containerID="3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c" exitCode=0 Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.885844 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxb6x" event={"ID":"019ce7ce-e01c-4708-a3ea-4e4139158064","Type":"ContainerDied","Data":"3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c"} Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.885873 4907 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-wxb6x" event={"ID":"019ce7ce-e01c-4708-a3ea-4e4139158064","Type":"ContainerDied","Data":"17e40b23eac574b1fad07b7158cb8bc7797422af7d2de32e35079705e0b6e787"} Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.885891 4907 scope.go:117] "RemoveContainer" containerID="3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c" Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.886082 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxb6x" Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.928615 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wxb6x"] Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.929647 4907 scope.go:117] "RemoveContainer" containerID="a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1" Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.941490 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wxb6x"] Jan 27 19:01:48 crc kubenswrapper[4907]: I0127 19:01:48.957108 4907 scope.go:117] "RemoveContainer" containerID="ae277e72e1b29c9d6de099985c143ae0deaedb192cea85abcb4b68364d325863" Jan 27 19:01:49 crc kubenswrapper[4907]: I0127 19:01:49.009194 4907 scope.go:117] "RemoveContainer" containerID="3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c" Jan 27 19:01:49 crc kubenswrapper[4907]: E0127 19:01:49.009720 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c\": container with ID starting with 3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c not found: ID does not exist" containerID="3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c" Jan 27 19:01:49 crc kubenswrapper[4907]: I0127 
19:01:49.009751 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c"} err="failed to get container status \"3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c\": rpc error: code = NotFound desc = could not find container \"3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c\": container with ID starting with 3fbd611dd937e81805fb7b2b149b0151c921581bfe50cc69cea508aada5b540c not found: ID does not exist" Jan 27 19:01:49 crc kubenswrapper[4907]: I0127 19:01:49.009771 4907 scope.go:117] "RemoveContainer" containerID="a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1" Jan 27 19:01:49 crc kubenswrapper[4907]: E0127 19:01:49.010117 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1\": container with ID starting with a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1 not found: ID does not exist" containerID="a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1" Jan 27 19:01:49 crc kubenswrapper[4907]: I0127 19:01:49.010166 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1"} err="failed to get container status \"a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1\": rpc error: code = NotFound desc = could not find container \"a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1\": container with ID starting with a7d2069363915a856d77a2c3b80350f9625a779f268a14c57ab29a7f9af689a1 not found: ID does not exist" Jan 27 19:01:49 crc kubenswrapper[4907]: I0127 19:01:49.010199 4907 scope.go:117] "RemoveContainer" containerID="ae277e72e1b29c9d6de099985c143ae0deaedb192cea85abcb4b68364d325863" Jan 27 19:01:49 crc 
kubenswrapper[4907]: E0127 19:01:49.010492 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae277e72e1b29c9d6de099985c143ae0deaedb192cea85abcb4b68364d325863\": container with ID starting with ae277e72e1b29c9d6de099985c143ae0deaedb192cea85abcb4b68364d325863 not found: ID does not exist" containerID="ae277e72e1b29c9d6de099985c143ae0deaedb192cea85abcb4b68364d325863" Jan 27 19:01:49 crc kubenswrapper[4907]: I0127 19:01:49.010524 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae277e72e1b29c9d6de099985c143ae0deaedb192cea85abcb4b68364d325863"} err="failed to get container status \"ae277e72e1b29c9d6de099985c143ae0deaedb192cea85abcb4b68364d325863\": rpc error: code = NotFound desc = could not find container \"ae277e72e1b29c9d6de099985c143ae0deaedb192cea85abcb4b68364d325863\": container with ID starting with ae277e72e1b29c9d6de099985c143ae0deaedb192cea85abcb4b68364d325863 not found: ID does not exist" Jan 27 19:01:49 crc kubenswrapper[4907]: I0127 19:01:49.768354 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="019ce7ce-e01c-4708-a3ea-4e4139158064" path="/var/lib/kubelet/pods/019ce7ce-e01c-4708-a3ea-4e4139158064/volumes" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.146102 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dqk6q"] Jan 27 19:02:34 crc kubenswrapper[4907]: E0127 19:02:34.147348 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="019ce7ce-e01c-4708-a3ea-4e4139158064" containerName="extract-utilities" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.147366 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="019ce7ce-e01c-4708-a3ea-4e4139158064" containerName="extract-utilities" Jan 27 19:02:34 crc kubenswrapper[4907]: E0127 19:02:34.147387 4907 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="019ce7ce-e01c-4708-a3ea-4e4139158064" containerName="registry-server" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.147394 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="019ce7ce-e01c-4708-a3ea-4e4139158064" containerName="registry-server" Jan 27 19:02:34 crc kubenswrapper[4907]: E0127 19:02:34.147437 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="019ce7ce-e01c-4708-a3ea-4e4139158064" containerName="extract-content" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.147445 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="019ce7ce-e01c-4708-a3ea-4e4139158064" containerName="extract-content" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.147686 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="019ce7ce-e01c-4708-a3ea-4e4139158064" containerName="registry-server" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.149832 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqk6q" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.172938 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dqk6q"] Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.289105 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z97xw\" (UniqueName: \"kubernetes.io/projected/96cfd20e-3418-4124-ad85-e794d2fad77d-kube-api-access-z97xw\") pod \"redhat-operators-dqk6q\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") " pod="openshift-marketplace/redhat-operators-dqk6q" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.289188 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-utilities\") pod \"redhat-operators-dqk6q\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") " 
pod="openshift-marketplace/redhat-operators-dqk6q" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.289475 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-catalog-content\") pod \"redhat-operators-dqk6q\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") " pod="openshift-marketplace/redhat-operators-dqk6q" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.392321 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-catalog-content\") pod \"redhat-operators-dqk6q\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") " pod="openshift-marketplace/redhat-operators-dqk6q" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.392509 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z97xw\" (UniqueName: \"kubernetes.io/projected/96cfd20e-3418-4124-ad85-e794d2fad77d-kube-api-access-z97xw\") pod \"redhat-operators-dqk6q\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") " pod="openshift-marketplace/redhat-operators-dqk6q" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.392580 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-utilities\") pod \"redhat-operators-dqk6q\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") " pod="openshift-marketplace/redhat-operators-dqk6q" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.392929 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-catalog-content\") pod \"redhat-operators-dqk6q\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") " 
pod="openshift-marketplace/redhat-operators-dqk6q" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.393195 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-utilities\") pod \"redhat-operators-dqk6q\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") " pod="openshift-marketplace/redhat-operators-dqk6q" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.412756 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z97xw\" (UniqueName: \"kubernetes.io/projected/96cfd20e-3418-4124-ad85-e794d2fad77d-kube-api-access-z97xw\") pod \"redhat-operators-dqk6q\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") " pod="openshift-marketplace/redhat-operators-dqk6q" Jan 27 19:02:34 crc kubenswrapper[4907]: I0127 19:02:34.473039 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqk6q" Jan 27 19:02:35 crc kubenswrapper[4907]: I0127 19:02:35.027073 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dqk6q"] Jan 27 19:02:35 crc kubenswrapper[4907]: I0127 19:02:35.427773 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqk6q" event={"ID":"96cfd20e-3418-4124-ad85-e794d2fad77d","Type":"ContainerStarted","Data":"4b7bada88b753e16b48cbb3e94d46d3b57b2e824e7653126a28f79fe70c1f648"} Jan 27 19:02:36 crc kubenswrapper[4907]: I0127 19:02:36.456116 4907 generic.go:334] "Generic (PLEG): container finished" podID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerID="3a472f93360ef6cc49545eb7c8a0e644cdd111fdbe5f4766f04624d869de6210" exitCode=0 Jan 27 19:02:36 crc kubenswrapper[4907]: I0127 19:02:36.456170 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqk6q" 
event={"ID":"96cfd20e-3418-4124-ad85-e794d2fad77d","Type":"ContainerDied","Data":"3a472f93360ef6cc49545eb7c8a0e644cdd111fdbe5f4766f04624d869de6210"} Jan 27 19:02:38 crc kubenswrapper[4907]: I0127 19:02:38.479063 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqk6q" event={"ID":"96cfd20e-3418-4124-ad85-e794d2fad77d","Type":"ContainerStarted","Data":"9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e"} Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.785516 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-whhb5"] Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.789900 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whhb5" Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.799971 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-whhb5"] Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.835884 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-utilities\") pod \"redhat-marketplace-whhb5\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") " pod="openshift-marketplace/redhat-marketplace-whhb5" Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.837518 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh7qj\" (UniqueName: \"kubernetes.io/projected/35a463a4-55b4-4c04-a4f4-ad5b13691a68-kube-api-access-xh7qj\") pod \"redhat-marketplace-whhb5\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") " pod="openshift-marketplace/redhat-marketplace-whhb5" Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.837684 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-catalog-content\") pod \"redhat-marketplace-whhb5\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") " pod="openshift-marketplace/redhat-marketplace-whhb5" Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.939556 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-catalog-content\") pod \"redhat-marketplace-whhb5\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") " pod="openshift-marketplace/redhat-marketplace-whhb5" Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.939744 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-utilities\") pod \"redhat-marketplace-whhb5\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") " pod="openshift-marketplace/redhat-marketplace-whhb5" Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.939899 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh7qj\" (UniqueName: \"kubernetes.io/projected/35a463a4-55b4-4c04-a4f4-ad5b13691a68-kube-api-access-xh7qj\") pod \"redhat-marketplace-whhb5\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") " pod="openshift-marketplace/redhat-marketplace-whhb5" Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.940228 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-catalog-content\") pod \"redhat-marketplace-whhb5\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") " pod="openshift-marketplace/redhat-marketplace-whhb5" Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.940436 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-utilities\") pod \"redhat-marketplace-whhb5\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") " pod="openshift-marketplace/redhat-marketplace-whhb5" Jan 27 19:02:46 crc kubenswrapper[4907]: I0127 19:02:46.966030 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh7qj\" (UniqueName: \"kubernetes.io/projected/35a463a4-55b4-4c04-a4f4-ad5b13691a68-kube-api-access-xh7qj\") pod \"redhat-marketplace-whhb5\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") " pod="openshift-marketplace/redhat-marketplace-whhb5" Jan 27 19:02:47 crc kubenswrapper[4907]: I0127 19:02:47.108540 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whhb5" Jan 27 19:02:47 crc kubenswrapper[4907]: I0127 19:02:47.604822 4907 generic.go:334] "Generic (PLEG): container finished" podID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerID="9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e" exitCode=0 Jan 27 19:02:47 crc kubenswrapper[4907]: I0127 19:02:47.605134 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqk6q" event={"ID":"96cfd20e-3418-4124-ad85-e794d2fad77d","Type":"ContainerDied","Data":"9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e"} Jan 27 19:02:48 crc kubenswrapper[4907]: I0127 19:02:48.086665 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-whhb5"] Jan 27 19:02:48 crc kubenswrapper[4907]: I0127 19:02:48.616678 4907 generic.go:334] "Generic (PLEG): container finished" podID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" containerID="179fc46d44622fcbaef3bcb27c7b2d9dd398779fdc15170c96e39434fc4f013a" exitCode=0 Jan 27 19:02:48 crc kubenswrapper[4907]: I0127 19:02:48.616785 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whhb5" 
event={"ID":"35a463a4-55b4-4c04-a4f4-ad5b13691a68","Type":"ContainerDied","Data":"179fc46d44622fcbaef3bcb27c7b2d9dd398779fdc15170c96e39434fc4f013a"} Jan 27 19:02:48 crc kubenswrapper[4907]: I0127 19:02:48.617295 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whhb5" event={"ID":"35a463a4-55b4-4c04-a4f4-ad5b13691a68","Type":"ContainerStarted","Data":"3d91b0c7e2dd6ccad025cfb3de7c34c804667f8241cb40d44be97b416557728b"} Jan 27 19:02:48 crc kubenswrapper[4907]: I0127 19:02:48.623842 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqk6q" event={"ID":"96cfd20e-3418-4124-ad85-e794d2fad77d","Type":"ContainerStarted","Data":"6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2"} Jan 27 19:02:48 crc kubenswrapper[4907]: I0127 19:02:48.672668 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dqk6q" podStartSLOduration=3.070450829 podStartE2EDuration="14.672649436s" podCreationTimestamp="2026-01-27 19:02:34 +0000 UTC" firstStartedPulling="2026-01-27 19:02:36.460589998 +0000 UTC m=+3411.589872610" lastFinishedPulling="2026-01-27 19:02:48.062788585 +0000 UTC m=+3423.192071217" observedRunningTime="2026-01-27 19:02:48.658348385 +0000 UTC m=+3423.787630987" watchObservedRunningTime="2026-01-27 19:02:48.672649436 +0000 UTC m=+3423.801932048" Jan 27 19:02:50 crc kubenswrapper[4907]: I0127 19:02:50.647975 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whhb5" event={"ID":"35a463a4-55b4-4c04-a4f4-ad5b13691a68","Type":"ContainerStarted","Data":"90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005"} Jan 27 19:02:51 crc kubenswrapper[4907]: I0127 19:02:51.664460 4907 generic.go:334] "Generic (PLEG): container finished" podID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" containerID="90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005" 
exitCode=0 Jan 27 19:02:51 crc kubenswrapper[4907]: I0127 19:02:51.664664 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whhb5" event={"ID":"35a463a4-55b4-4c04-a4f4-ad5b13691a68","Type":"ContainerDied","Data":"90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005"} Jan 27 19:02:52 crc kubenswrapper[4907]: I0127 19:02:52.680908 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whhb5" event={"ID":"35a463a4-55b4-4c04-a4f4-ad5b13691a68","Type":"ContainerStarted","Data":"ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490"} Jan 27 19:02:52 crc kubenswrapper[4907]: I0127 19:02:52.704968 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-whhb5" podStartSLOduration=3.177323084 podStartE2EDuration="6.704946698s" podCreationTimestamp="2026-01-27 19:02:46 +0000 UTC" firstStartedPulling="2026-01-27 19:02:48.61900561 +0000 UTC m=+3423.748288222" lastFinishedPulling="2026-01-27 19:02:52.146629224 +0000 UTC m=+3427.275911836" observedRunningTime="2026-01-27 19:02:52.699599828 +0000 UTC m=+3427.828882440" watchObservedRunningTime="2026-01-27 19:02:52.704946698 +0000 UTC m=+3427.834229310" Jan 27 19:02:54 crc kubenswrapper[4907]: I0127 19:02:54.474736 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dqk6q" Jan 27 19:02:54 crc kubenswrapper[4907]: I0127 19:02:54.475097 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dqk6q" Jan 27 19:02:55 crc kubenswrapper[4907]: I0127 19:02:55.520957 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dqk6q" podUID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerName="registry-server" probeResult="failure" output=< Jan 27 19:02:55 crc kubenswrapper[4907]: timeout: failed to 
connect service ":50051" within 1s Jan 27 19:02:55 crc kubenswrapper[4907]: > Jan 27 19:02:56 crc kubenswrapper[4907]: I0127 19:02:56.521699 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:02:56 crc kubenswrapper[4907]: I0127 19:02:56.521773 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:02:57 crc kubenswrapper[4907]: I0127 19:02:57.109306 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-whhb5" Jan 27 19:02:57 crc kubenswrapper[4907]: I0127 19:02:57.109397 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-whhb5" Jan 27 19:02:57 crc kubenswrapper[4907]: I0127 19:02:57.164047 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-whhb5" Jan 27 19:02:57 crc kubenswrapper[4907]: I0127 19:02:57.808521 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-whhb5" Jan 27 19:02:57 crc kubenswrapper[4907]: I0127 19:02:57.865675 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-whhb5"] Jan 27 19:02:59 crc kubenswrapper[4907]: I0127 19:02:59.763377 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-whhb5" podUID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" 
containerName="registry-server" containerID="cri-o://ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490" gracePeriod=2 Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.330758 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whhb5" Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.476606 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-utilities\") pod \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") " Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.477122 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh7qj\" (UniqueName: \"kubernetes.io/projected/35a463a4-55b4-4c04-a4f4-ad5b13691a68-kube-api-access-xh7qj\") pod \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") " Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.477232 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-catalog-content\") pod \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\" (UID: \"35a463a4-55b4-4c04-a4f4-ad5b13691a68\") " Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.477728 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-utilities" (OuterVolumeSpecName: "utilities") pod "35a463a4-55b4-4c04-a4f4-ad5b13691a68" (UID: "35a463a4-55b4-4c04-a4f4-ad5b13691a68"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.478357 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.483680 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35a463a4-55b4-4c04-a4f4-ad5b13691a68-kube-api-access-xh7qj" (OuterVolumeSpecName: "kube-api-access-xh7qj") pod "35a463a4-55b4-4c04-a4f4-ad5b13691a68" (UID: "35a463a4-55b4-4c04-a4f4-ad5b13691a68"). InnerVolumeSpecName "kube-api-access-xh7qj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.505765 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35a463a4-55b4-4c04-a4f4-ad5b13691a68" (UID: "35a463a4-55b4-4c04-a4f4-ad5b13691a68"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.580338 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh7qj\" (UniqueName: \"kubernetes.io/projected/35a463a4-55b4-4c04-a4f4-ad5b13691a68-kube-api-access-xh7qj\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.580375 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a463a4-55b4-4c04-a4f4-ad5b13691a68-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.776411 4907 generic.go:334] "Generic (PLEG): container finished" podID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" containerID="ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490" exitCode=0 Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.776462 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whhb5" event={"ID":"35a463a4-55b4-4c04-a4f4-ad5b13691a68","Type":"ContainerDied","Data":"ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490"} Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.776520 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whhb5" event={"ID":"35a463a4-55b4-4c04-a4f4-ad5b13691a68","Type":"ContainerDied","Data":"3d91b0c7e2dd6ccad025cfb3de7c34c804667f8241cb40d44be97b416557728b"} Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.776536 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whhb5" Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.776545 4907 scope.go:117] "RemoveContainer" containerID="ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490" Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.812442 4907 scope.go:117] "RemoveContainer" containerID="90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005" Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.823402 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-whhb5"] Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.835916 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-whhb5"] Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.839685 4907 scope.go:117] "RemoveContainer" containerID="179fc46d44622fcbaef3bcb27c7b2d9dd398779fdc15170c96e39434fc4f013a" Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.911687 4907 scope.go:117] "RemoveContainer" containerID="ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490" Jan 27 19:03:00 crc kubenswrapper[4907]: E0127 19:03:00.912090 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490\": container with ID starting with ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490 not found: ID does not exist" containerID="ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490" Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.912123 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490"} err="failed to get container status \"ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490\": rpc error: code = NotFound desc = could not find container 
\"ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490\": container with ID starting with ef6748420f02070d5bd6d371b9cffa3eb6d5a87addf17d3e5f36fe46f57ae490 not found: ID does not exist" Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.912149 4907 scope.go:117] "RemoveContainer" containerID="90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005" Jan 27 19:03:00 crc kubenswrapper[4907]: E0127 19:03:00.912388 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005\": container with ID starting with 90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005 not found: ID does not exist" containerID="90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005" Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.912416 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005"} err="failed to get container status \"90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005\": rpc error: code = NotFound desc = could not find container \"90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005\": container with ID starting with 90e926c140a9db2db2a01730c463a2b95b527d9b204faf0553dc58fb03415005 not found: ID does not exist" Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.912431 4907 scope.go:117] "RemoveContainer" containerID="179fc46d44622fcbaef3bcb27c7b2d9dd398779fdc15170c96e39434fc4f013a" Jan 27 19:03:00 crc kubenswrapper[4907]: E0127 19:03:00.912668 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"179fc46d44622fcbaef3bcb27c7b2d9dd398779fdc15170c96e39434fc4f013a\": container with ID starting with 179fc46d44622fcbaef3bcb27c7b2d9dd398779fdc15170c96e39434fc4f013a not found: ID does not exist" 
containerID="179fc46d44622fcbaef3bcb27c7b2d9dd398779fdc15170c96e39434fc4f013a" Jan 27 19:03:00 crc kubenswrapper[4907]: I0127 19:03:00.912692 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"179fc46d44622fcbaef3bcb27c7b2d9dd398779fdc15170c96e39434fc4f013a"} err="failed to get container status \"179fc46d44622fcbaef3bcb27c7b2d9dd398779fdc15170c96e39434fc4f013a\": rpc error: code = NotFound desc = could not find container \"179fc46d44622fcbaef3bcb27c7b2d9dd398779fdc15170c96e39434fc4f013a\": container with ID starting with 179fc46d44622fcbaef3bcb27c7b2d9dd398779fdc15170c96e39434fc4f013a not found: ID does not exist" Jan 27 19:03:01 crc kubenswrapper[4907]: I0127 19:03:01.762264 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" path="/var/lib/kubelet/pods/35a463a4-55b4-4c04-a4f4-ad5b13691a68/volumes" Jan 27 19:03:04 crc kubenswrapper[4907]: I0127 19:03:04.525531 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dqk6q" Jan 27 19:03:04 crc kubenswrapper[4907]: I0127 19:03:04.576244 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dqk6q" Jan 27 19:03:05 crc kubenswrapper[4907]: I0127 19:03:05.342462 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dqk6q"] Jan 27 19:03:05 crc kubenswrapper[4907]: I0127 19:03:05.842134 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dqk6q" podUID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerName="registry-server" containerID="cri-o://6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2" gracePeriod=2 Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.393953 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dqk6q" Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.438271 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z97xw\" (UniqueName: \"kubernetes.io/projected/96cfd20e-3418-4124-ad85-e794d2fad77d-kube-api-access-z97xw\") pod \"96cfd20e-3418-4124-ad85-e794d2fad77d\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") " Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.438494 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-utilities\") pod \"96cfd20e-3418-4124-ad85-e794d2fad77d\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") " Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.438609 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-catalog-content\") pod \"96cfd20e-3418-4124-ad85-e794d2fad77d\" (UID: \"96cfd20e-3418-4124-ad85-e794d2fad77d\") " Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.446221 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96cfd20e-3418-4124-ad85-e794d2fad77d-kube-api-access-z97xw" (OuterVolumeSpecName: "kube-api-access-z97xw") pod "96cfd20e-3418-4124-ad85-e794d2fad77d" (UID: "96cfd20e-3418-4124-ad85-e794d2fad77d"). InnerVolumeSpecName "kube-api-access-z97xw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.446725 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-utilities" (OuterVolumeSpecName: "utilities") pod "96cfd20e-3418-4124-ad85-e794d2fad77d" (UID: "96cfd20e-3418-4124-ad85-e794d2fad77d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.580186 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.580229 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z97xw\" (UniqueName: \"kubernetes.io/projected/96cfd20e-3418-4124-ad85-e794d2fad77d-kube-api-access-z97xw\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.709029 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96cfd20e-3418-4124-ad85-e794d2fad77d" (UID: "96cfd20e-3418-4124-ad85-e794d2fad77d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.787680 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96cfd20e-3418-4124-ad85-e794d2fad77d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.856360 4907 generic.go:334] "Generic (PLEG): container finished" podID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerID="6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2" exitCode=0 Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.856418 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dqk6q" Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.856442 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqk6q" event={"ID":"96cfd20e-3418-4124-ad85-e794d2fad77d","Type":"ContainerDied","Data":"6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2"} Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.856804 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqk6q" event={"ID":"96cfd20e-3418-4124-ad85-e794d2fad77d","Type":"ContainerDied","Data":"4b7bada88b753e16b48cbb3e94d46d3b57b2e824e7653126a28f79fe70c1f648"} Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.856821 4907 scope.go:117] "RemoveContainer" containerID="6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2" Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.878765 4907 scope.go:117] "RemoveContainer" containerID="9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e" Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.914544 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dqk6q"] Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.930168 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dqk6q"] Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.937165 4907 scope.go:117] "RemoveContainer" containerID="3a472f93360ef6cc49545eb7c8a0e644cdd111fdbe5f4766f04624d869de6210" Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.974282 4907 scope.go:117] "RemoveContainer" containerID="6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2" Jan 27 19:03:06 crc kubenswrapper[4907]: E0127 19:03:06.974820 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2\": container with ID starting with 6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2 not found: ID does not exist" containerID="6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2" Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.974879 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2"} err="failed to get container status \"6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2\": rpc error: code = NotFound desc = could not find container \"6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2\": container with ID starting with 6bc89177ca4b15c5de5162448d6584671af6948fe6cb69d2853c6e9c871529c2 not found: ID does not exist" Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.974906 4907 scope.go:117] "RemoveContainer" containerID="9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e" Jan 27 19:03:06 crc kubenswrapper[4907]: E0127 19:03:06.975485 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e\": container with ID starting with 9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e not found: ID does not exist" containerID="9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e" Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.975536 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e"} err="failed to get container status \"9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e\": rpc error: code = NotFound desc = could not find container \"9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e\": container with ID 
starting with 9bedf268f9d5e51b3c94f5591a7aa929a17f6c0553dd2eec5321284acf2b816e not found: ID does not exist" Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.975644 4907 scope.go:117] "RemoveContainer" containerID="3a472f93360ef6cc49545eb7c8a0e644cdd111fdbe5f4766f04624d869de6210" Jan 27 19:03:06 crc kubenswrapper[4907]: E0127 19:03:06.976210 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a472f93360ef6cc49545eb7c8a0e644cdd111fdbe5f4766f04624d869de6210\": container with ID starting with 3a472f93360ef6cc49545eb7c8a0e644cdd111fdbe5f4766f04624d869de6210 not found: ID does not exist" containerID="3a472f93360ef6cc49545eb7c8a0e644cdd111fdbe5f4766f04624d869de6210" Jan 27 19:03:06 crc kubenswrapper[4907]: I0127 19:03:06.976245 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a472f93360ef6cc49545eb7c8a0e644cdd111fdbe5f4766f04624d869de6210"} err="failed to get container status \"3a472f93360ef6cc49545eb7c8a0e644cdd111fdbe5f4766f04624d869de6210\": rpc error: code = NotFound desc = could not find container \"3a472f93360ef6cc49545eb7c8a0e644cdd111fdbe5f4766f04624d869de6210\": container with ID starting with 3a472f93360ef6cc49545eb7c8a0e644cdd111fdbe5f4766f04624d869de6210 not found: ID does not exist" Jan 27 19:03:07 crc kubenswrapper[4907]: I0127 19:03:07.762455 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96cfd20e-3418-4124-ad85-e794d2fad77d" path="/var/lib/kubelet/pods/96cfd20e-3418-4124-ad85-e794d2fad77d/volumes" Jan 27 19:03:26 crc kubenswrapper[4907]: I0127 19:03:26.522275 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:03:26 crc kubenswrapper[4907]: I0127 
19:03:26.523105 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:03:56 crc kubenswrapper[4907]: I0127 19:03:56.521384 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:03:56 crc kubenswrapper[4907]: I0127 19:03:56.522441 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:03:56 crc kubenswrapper[4907]: I0127 19:03:56.522559 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 19:03:56 crc kubenswrapper[4907]: I0127 19:03:56.524133 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2807b33513fcc6ed6b52c3e953a67d080db4ce66b2d00fa0d2f298131fad7d94"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:03:56 crc kubenswrapper[4907]: I0127 19:03:56.524256 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" 
containerName="machine-config-daemon" containerID="cri-o://2807b33513fcc6ed6b52c3e953a67d080db4ce66b2d00fa0d2f298131fad7d94" gracePeriod=600 Jan 27 19:03:57 crc kubenswrapper[4907]: I0127 19:03:57.415406 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="2807b33513fcc6ed6b52c3e953a67d080db4ce66b2d00fa0d2f298131fad7d94" exitCode=0 Jan 27 19:03:57 crc kubenswrapper[4907]: I0127 19:03:57.415498 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"2807b33513fcc6ed6b52c3e953a67d080db4ce66b2d00fa0d2f298131fad7d94"} Jan 27 19:03:57 crc kubenswrapper[4907]: I0127 19:03:57.415857 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809"} Jan 27 19:03:57 crc kubenswrapper[4907]: I0127 19:03:57.415897 4907 scope.go:117] "RemoveContainer" containerID="97705a95a639040745de3671e9b2a28506a73d063810d79bbe209b67ccb31f95" Jan 27 19:05:56 crc kubenswrapper[4907]: I0127 19:05:56.536241 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:05:56 crc kubenswrapper[4907]: I0127 19:05:56.537223 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 
27 19:06:26 crc kubenswrapper[4907]: I0127 19:06:26.521774 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:06:26 crc kubenswrapper[4907]: I0127 19:06:26.522441 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:06:56 crc kubenswrapper[4907]: I0127 19:06:56.521294 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:06:56 crc kubenswrapper[4907]: I0127 19:06:56.521711 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:06:56 crc kubenswrapper[4907]: I0127 19:06:56.521767 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 19:06:56 crc kubenswrapper[4907]: I0127 19:06:56.522780 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809"} 
pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:06:56 crc kubenswrapper[4907]: I0127 19:06:56.522846 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" gracePeriod=600 Jan 27 19:06:56 crc kubenswrapper[4907]: E0127 19:06:56.650834 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:06:57 crc kubenswrapper[4907]: I0127 19:06:57.598264 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" exitCode=0 Jan 27 19:06:57 crc kubenswrapper[4907]: I0127 19:06:57.598349 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809"} Jan 27 19:06:57 crc kubenswrapper[4907]: I0127 19:06:57.598700 4907 scope.go:117] "RemoveContainer" containerID="2807b33513fcc6ed6b52c3e953a67d080db4ce66b2d00fa0d2f298131fad7d94" Jan 27 19:06:57 crc kubenswrapper[4907]: I0127 19:06:57.599581 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 
27 19:06:57 crc kubenswrapper[4907]: E0127 19:06:57.600016 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:07:07 crc kubenswrapper[4907]: I0127 19:07:07.753397 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:07:07 crc kubenswrapper[4907]: E0127 19:07:07.754235 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:07:18 crc kubenswrapper[4907]: I0127 19:07:18.748451 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:07:18 crc kubenswrapper[4907]: E0127 19:07:18.749320 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:07:29 crc kubenswrapper[4907]: I0127 19:07:29.748381 4907 scope.go:117] "RemoveContainer" 
containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:07:29 crc kubenswrapper[4907]: E0127 19:07:29.749794 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:07:41 crc kubenswrapper[4907]: I0127 19:07:41.750492 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:07:41 crc kubenswrapper[4907]: E0127 19:07:41.751357 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:07:53 crc kubenswrapper[4907]: I0127 19:07:53.749352 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:07:53 crc kubenswrapper[4907]: E0127 19:07:53.750403 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:08:05 crc kubenswrapper[4907]: I0127 19:08:05.767777 4907 scope.go:117] 
"RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:08:05 crc kubenswrapper[4907]: E0127 19:08:05.768792 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:08:16 crc kubenswrapper[4907]: I0127 19:08:16.748488 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:08:16 crc kubenswrapper[4907]: E0127 19:08:16.749384 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:08:30 crc kubenswrapper[4907]: I0127 19:08:30.749245 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:08:30 crc kubenswrapper[4907]: E0127 19:08:30.750205 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:08:43 crc kubenswrapper[4907]: I0127 19:08:43.748835 
4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:08:43 crc kubenswrapper[4907]: E0127 19:08:43.749493 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:08:56 crc kubenswrapper[4907]: I0127 19:08:56.750029 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:08:56 crc kubenswrapper[4907]: E0127 19:08:56.751343 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:09:09 crc kubenswrapper[4907]: I0127 19:09:09.758983 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:09:09 crc kubenswrapper[4907]: E0127 19:09:09.760615 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:09:23 crc kubenswrapper[4907]: I0127 
19:09:23.748197 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:09:23 crc kubenswrapper[4907]: E0127 19:09:23.749160 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:09:38 crc kubenswrapper[4907]: I0127 19:09:38.748798 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:09:38 crc kubenswrapper[4907]: E0127 19:09:38.749539 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:09:49 crc kubenswrapper[4907]: I0127 19:09:49.748826 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:09:49 crc kubenswrapper[4907]: E0127 19:09:49.750060 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:10:02 crc 
kubenswrapper[4907]: I0127 19:10:02.747870 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:10:02 crc kubenswrapper[4907]: E0127 19:10:02.748659 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:10:15 crc kubenswrapper[4907]: I0127 19:10:15.764092 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:10:15 crc kubenswrapper[4907]: E0127 19:10:15.765141 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:10:26 crc kubenswrapper[4907]: I0127 19:10:26.750357 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:10:26 crc kubenswrapper[4907]: E0127 19:10:26.751303 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 
27 19:10:40 crc kubenswrapper[4907]: I0127 19:10:40.748404 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:10:40 crc kubenswrapper[4907]: E0127 19:10:40.750540 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:10:53 crc kubenswrapper[4907]: I0127 19:10:53.754873 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:10:53 crc kubenswrapper[4907]: E0127 19:10:53.755615 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:11:08 crc kubenswrapper[4907]: I0127 19:11:08.748397 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:11:08 crc kubenswrapper[4907]: E0127 19:11:08.750916 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" 
podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:11:19 crc kubenswrapper[4907]: I0127 19:11:19.757893 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:11:19 crc kubenswrapper[4907]: E0127 19:11:19.759385 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:11:31 crc kubenswrapper[4907]: I0127 19:11:31.749065 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:11:31 crc kubenswrapper[4907]: E0127 19:11:31.749935 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:11:46 crc kubenswrapper[4907]: I0127 19:11:46.748625 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:11:46 crc kubenswrapper[4907]: E0127 19:11:46.749419 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:11:58 crc kubenswrapper[4907]: I0127 19:11:58.749933 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:11:59 crc kubenswrapper[4907]: I0127 19:11:59.294231 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"8138402587da7ef9ba4b0645d19833dc5fb3c20ffc6c4811bbcc443d5ec8c725"} Jan 27 19:12:33 crc kubenswrapper[4907]: I0127 19:12:33.950064 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v2bdb"] Jan 27 19:12:33 crc kubenswrapper[4907]: E0127 19:12:33.950970 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerName="extract-utilities" Jan 27 19:12:33 crc kubenswrapper[4907]: I0127 19:12:33.950983 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerName="extract-utilities" Jan 27 19:12:33 crc kubenswrapper[4907]: E0127 19:12:33.951021 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" containerName="extract-utilities" Jan 27 19:12:33 crc kubenswrapper[4907]: I0127 19:12:33.951027 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" containerName="extract-utilities" Jan 27 19:12:33 crc kubenswrapper[4907]: E0127 19:12:33.951037 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerName="registry-server" Jan 27 19:12:33 crc kubenswrapper[4907]: I0127 19:12:33.951043 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerName="registry-server" 
Jan 27 19:12:33 crc kubenswrapper[4907]: E0127 19:12:33.951058 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" containerName="extract-content" Jan 27 19:12:33 crc kubenswrapper[4907]: I0127 19:12:33.951065 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" containerName="extract-content" Jan 27 19:12:33 crc kubenswrapper[4907]: E0127 19:12:33.951086 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerName="extract-content" Jan 27 19:12:33 crc kubenswrapper[4907]: I0127 19:12:33.951092 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerName="extract-content" Jan 27 19:12:33 crc kubenswrapper[4907]: E0127 19:12:33.951104 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" containerName="registry-server" Jan 27 19:12:33 crc kubenswrapper[4907]: I0127 19:12:33.951109 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" containerName="registry-server" Jan 27 19:12:33 crc kubenswrapper[4907]: I0127 19:12:33.951324 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a463a4-55b4-4c04-a4f4-ad5b13691a68" containerName="registry-server" Jan 27 19:12:33 crc kubenswrapper[4907]: I0127 19:12:33.951338 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="96cfd20e-3418-4124-ad85-e794d2fad77d" containerName="registry-server" Jan 27 19:12:33 crc kubenswrapper[4907]: I0127 19:12:33.953566 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:33 crc kubenswrapper[4907]: I0127 19:12:33.967362 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v2bdb"] Jan 27 19:12:34 crc kubenswrapper[4907]: I0127 19:12:34.138173 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-utilities\") pod \"community-operators-v2bdb\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:34 crc kubenswrapper[4907]: I0127 19:12:34.138493 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-catalog-content\") pod \"community-operators-v2bdb\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:34 crc kubenswrapper[4907]: I0127 19:12:34.138611 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l99sj\" (UniqueName: \"kubernetes.io/projected/3d9d40db-3b3f-4272-a686-47234c2aa239-kube-api-access-l99sj\") pod \"community-operators-v2bdb\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:34 crc kubenswrapper[4907]: I0127 19:12:34.241437 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-catalog-content\") pod \"community-operators-v2bdb\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:34 crc kubenswrapper[4907]: I0127 19:12:34.241509 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-l99sj\" (UniqueName: \"kubernetes.io/projected/3d9d40db-3b3f-4272-a686-47234c2aa239-kube-api-access-l99sj\") pod \"community-operators-v2bdb\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:34 crc kubenswrapper[4907]: I0127 19:12:34.241665 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-utilities\") pod \"community-operators-v2bdb\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:34 crc kubenswrapper[4907]: I0127 19:12:34.241959 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-catalog-content\") pod \"community-operators-v2bdb\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:34 crc kubenswrapper[4907]: I0127 19:12:34.242038 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-utilities\") pod \"community-operators-v2bdb\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:34 crc kubenswrapper[4907]: I0127 19:12:34.268370 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l99sj\" (UniqueName: \"kubernetes.io/projected/3d9d40db-3b3f-4272-a686-47234c2aa239-kube-api-access-l99sj\") pod \"community-operators-v2bdb\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:34 crc kubenswrapper[4907]: I0127 19:12:34.283313 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:35 crc kubenswrapper[4907]: I0127 19:12:35.006210 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v2bdb"] Jan 27 19:12:35 crc kubenswrapper[4907]: W0127 19:12:35.553681 4907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d9d40db_3b3f_4272_a686_47234c2aa239.slice/crio-1cfc6e4a83ed1b7dc7fcac6bb497ae517482ea37831a8d376fea24005f3aaeed WatchSource:0}: Error finding container 1cfc6e4a83ed1b7dc7fcac6bb497ae517482ea37831a8d376fea24005f3aaeed: Status 404 returned error can't find the container with id 1cfc6e4a83ed1b7dc7fcac6bb497ae517482ea37831a8d376fea24005f3aaeed Jan 27 19:12:35 crc kubenswrapper[4907]: I0127 19:12:35.716341 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2bdb" event={"ID":"3d9d40db-3b3f-4272-a686-47234c2aa239","Type":"ContainerStarted","Data":"1cfc6e4a83ed1b7dc7fcac6bb497ae517482ea37831a8d376fea24005f3aaeed"} Jan 27 19:12:36 crc kubenswrapper[4907]: I0127 19:12:36.731060 4907 generic.go:334] "Generic (PLEG): container finished" podID="3d9d40db-3b3f-4272-a686-47234c2aa239" containerID="bab1fbc6d9b7754568daae4d509f94665f3a437c26280b399e30846171f996f5" exitCode=0 Jan 27 19:12:36 crc kubenswrapper[4907]: I0127 19:12:36.731134 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2bdb" event={"ID":"3d9d40db-3b3f-4272-a686-47234c2aa239","Type":"ContainerDied","Data":"bab1fbc6d9b7754568daae4d509f94665f3a437c26280b399e30846171f996f5"} Jan 27 19:12:36 crc kubenswrapper[4907]: I0127 19:12:36.734050 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:12:38 crc kubenswrapper[4907]: I0127 19:12:38.764084 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-v2bdb" event={"ID":"3d9d40db-3b3f-4272-a686-47234c2aa239","Type":"ContainerStarted","Data":"3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6"} Jan 27 19:12:39 crc kubenswrapper[4907]: I0127 19:12:39.776891 4907 generic.go:334] "Generic (PLEG): container finished" podID="3d9d40db-3b3f-4272-a686-47234c2aa239" containerID="3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6" exitCode=0 Jan 27 19:12:39 crc kubenswrapper[4907]: I0127 19:12:39.776945 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2bdb" event={"ID":"3d9d40db-3b3f-4272-a686-47234c2aa239","Type":"ContainerDied","Data":"3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6"} Jan 27 19:12:40 crc kubenswrapper[4907]: I0127 19:12:40.788506 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2bdb" event={"ID":"3d9d40db-3b3f-4272-a686-47234c2aa239","Type":"ContainerStarted","Data":"5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79"} Jan 27 19:12:40 crc kubenswrapper[4907]: I0127 19:12:40.814850 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v2bdb" podStartSLOduration=4.355952375 podStartE2EDuration="7.814827694s" podCreationTimestamp="2026-01-27 19:12:33 +0000 UTC" firstStartedPulling="2026-01-27 19:12:36.733847107 +0000 UTC m=+4011.863129719" lastFinishedPulling="2026-01-27 19:12:40.192722426 +0000 UTC m=+4015.322005038" observedRunningTime="2026-01-27 19:12:40.80507285 +0000 UTC m=+4015.934355462" watchObservedRunningTime="2026-01-27 19:12:40.814827694 +0000 UTC m=+4015.944110306" Jan 27 19:12:44 crc kubenswrapper[4907]: I0127 19:12:44.283694 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:44 crc kubenswrapper[4907]: I0127 19:12:44.284301 4907 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:44 crc kubenswrapper[4907]: I0127 19:12:44.343028 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:54 crc kubenswrapper[4907]: I0127 19:12:54.333948 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:54 crc kubenswrapper[4907]: I0127 19:12:54.389117 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v2bdb"] Jan 27 19:12:54 crc kubenswrapper[4907]: I0127 19:12:54.932455 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v2bdb" podUID="3d9d40db-3b3f-4272-a686-47234c2aa239" containerName="registry-server" containerID="cri-o://5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79" gracePeriod=2 Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.675591 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.879785 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l99sj\" (UniqueName: \"kubernetes.io/projected/3d9d40db-3b3f-4272-a686-47234c2aa239-kube-api-access-l99sj\") pod \"3d9d40db-3b3f-4272-a686-47234c2aa239\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.879836 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-utilities\") pod \"3d9d40db-3b3f-4272-a686-47234c2aa239\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.880141 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-catalog-content\") pod \"3d9d40db-3b3f-4272-a686-47234c2aa239\" (UID: \"3d9d40db-3b3f-4272-a686-47234c2aa239\") " Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.882715 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-utilities" (OuterVolumeSpecName: "utilities") pod "3d9d40db-3b3f-4272-a686-47234c2aa239" (UID: "3d9d40db-3b3f-4272-a686-47234c2aa239"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.887603 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9d40db-3b3f-4272-a686-47234c2aa239-kube-api-access-l99sj" (OuterVolumeSpecName: "kube-api-access-l99sj") pod "3d9d40db-3b3f-4272-a686-47234c2aa239" (UID: "3d9d40db-3b3f-4272-a686-47234c2aa239"). InnerVolumeSpecName "kube-api-access-l99sj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.948537 4907 generic.go:334] "Generic (PLEG): container finished" podID="3d9d40db-3b3f-4272-a686-47234c2aa239" containerID="5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79" exitCode=0 Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.948649 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2bdb" event={"ID":"3d9d40db-3b3f-4272-a686-47234c2aa239","Type":"ContainerDied","Data":"5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79"} Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.948736 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v2bdb" Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.948816 4907 scope.go:117] "RemoveContainer" containerID="5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79" Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.948752 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2bdb" event={"ID":"3d9d40db-3b3f-4272-a686-47234c2aa239","Type":"ContainerDied","Data":"1cfc6e4a83ed1b7dc7fcac6bb497ae517482ea37831a8d376fea24005f3aaeed"} Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.960646 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d9d40db-3b3f-4272-a686-47234c2aa239" (UID: "3d9d40db-3b3f-4272-a686-47234c2aa239"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.975363 4907 scope.go:117] "RemoveContainer" containerID="3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6" Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.984184 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.984242 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l99sj\" (UniqueName: \"kubernetes.io/projected/3d9d40db-3b3f-4272-a686-47234c2aa239-kube-api-access-l99sj\") on node \"crc\" DevicePath \"\"" Jan 27 19:12:55 crc kubenswrapper[4907]: I0127 19:12:55.984257 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9d40db-3b3f-4272-a686-47234c2aa239-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:12:56 crc kubenswrapper[4907]: I0127 19:12:56.005913 4907 scope.go:117] "RemoveContainer" containerID="bab1fbc6d9b7754568daae4d509f94665f3a437c26280b399e30846171f996f5" Jan 27 19:12:56 crc kubenswrapper[4907]: I0127 19:12:56.056973 4907 scope.go:117] "RemoveContainer" containerID="5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79" Jan 27 19:12:56 crc kubenswrapper[4907]: E0127 19:12:56.071248 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79\": container with ID starting with 5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79 not found: ID does not exist" containerID="5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79" Jan 27 19:12:56 crc kubenswrapper[4907]: I0127 19:12:56.071322 4907 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79"} err="failed to get container status \"5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79\": rpc error: code = NotFound desc = could not find container \"5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79\": container with ID starting with 5330926b1454e92172b59e2a77ccb731ccdb16203ebe24f5e8a4144d9b4a5c79 not found: ID does not exist" Jan 27 19:12:56 crc kubenswrapper[4907]: I0127 19:12:56.071359 4907 scope.go:117] "RemoveContainer" containerID="3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6" Jan 27 19:12:56 crc kubenswrapper[4907]: E0127 19:12:56.071916 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6\": container with ID starting with 3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6 not found: ID does not exist" containerID="3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6" Jan 27 19:12:56 crc kubenswrapper[4907]: I0127 19:12:56.071955 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6"} err="failed to get container status \"3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6\": rpc error: code = NotFound desc = could not find container \"3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6\": container with ID starting with 3d4f5afb74c428fbef034ff1406a53a141cc3668fbb8cb7aa3ac6600e2d510f6 not found: ID does not exist" Jan 27 19:12:56 crc kubenswrapper[4907]: I0127 19:12:56.071982 4907 scope.go:117] "RemoveContainer" containerID="bab1fbc6d9b7754568daae4d509f94665f3a437c26280b399e30846171f996f5" Jan 27 19:12:56 crc kubenswrapper[4907]: E0127 19:12:56.072306 4907 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bab1fbc6d9b7754568daae4d509f94665f3a437c26280b399e30846171f996f5\": container with ID starting with bab1fbc6d9b7754568daae4d509f94665f3a437c26280b399e30846171f996f5 not found: ID does not exist" containerID="bab1fbc6d9b7754568daae4d509f94665f3a437c26280b399e30846171f996f5" Jan 27 19:12:56 crc kubenswrapper[4907]: I0127 19:12:56.072348 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bab1fbc6d9b7754568daae4d509f94665f3a437c26280b399e30846171f996f5"} err="failed to get container status \"bab1fbc6d9b7754568daae4d509f94665f3a437c26280b399e30846171f996f5\": rpc error: code = NotFound desc = could not find container \"bab1fbc6d9b7754568daae4d509f94665f3a437c26280b399e30846171f996f5\": container with ID starting with bab1fbc6d9b7754568daae4d509f94665f3a437c26280b399e30846171f996f5 not found: ID does not exist" Jan 27 19:12:56 crc kubenswrapper[4907]: I0127 19:12:56.303063 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v2bdb"] Jan 27 19:12:56 crc kubenswrapper[4907]: I0127 19:12:56.317844 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v2bdb"] Jan 27 19:12:57 crc kubenswrapper[4907]: I0127 19:12:57.759402 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9d40db-3b3f-4272-a686-47234c2aa239" path="/var/lib/kubelet/pods/3d9d40db-3b3f-4272-a686-47234c2aa239/volumes" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.618654 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sv8q2"] Jan 27 19:13:50 crc kubenswrapper[4907]: E0127 19:13:50.640780 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9d40db-3b3f-4272-a686-47234c2aa239" containerName="extract-content" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.640824 4907 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="3d9d40db-3b3f-4272-a686-47234c2aa239" containerName="extract-content" Jan 27 19:13:50 crc kubenswrapper[4907]: E0127 19:13:50.640849 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9d40db-3b3f-4272-a686-47234c2aa239" containerName="registry-server" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.640855 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9d40db-3b3f-4272-a686-47234c2aa239" containerName="registry-server" Jan 27 19:13:50 crc kubenswrapper[4907]: E0127 19:13:50.640960 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9d40db-3b3f-4272-a686-47234c2aa239" containerName="extract-utilities" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.640967 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9d40db-3b3f-4272-a686-47234c2aa239" containerName="extract-utilities" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.642064 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d9d40db-3b3f-4272-a686-47234c2aa239" containerName="registry-server" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.645718 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.668419 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sv8q2"] Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.713361 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnlnt\" (UniqueName: \"kubernetes.io/projected/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-kube-api-access-gnlnt\") pod \"redhat-operators-sv8q2\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.713764 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-catalog-content\") pod \"redhat-operators-sv8q2\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.713820 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-utilities\") pod \"redhat-operators-sv8q2\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.816138 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnlnt\" (UniqueName: \"kubernetes.io/projected/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-kube-api-access-gnlnt\") pod \"redhat-operators-sv8q2\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.816202 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-catalog-content\") pod \"redhat-operators-sv8q2\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.816232 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-utilities\") pod \"redhat-operators-sv8q2\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.816978 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-catalog-content\") pod \"redhat-operators-sv8q2\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.817034 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-utilities\") pod \"redhat-operators-sv8q2\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:13:50 crc kubenswrapper[4907]: I0127 19:13:50.949636 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnlnt\" (UniqueName: \"kubernetes.io/projected/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-kube-api-access-gnlnt\") pod \"redhat-operators-sv8q2\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:13:51 crc kubenswrapper[4907]: I0127 19:13:51.046586 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:13:51 crc kubenswrapper[4907]: I0127 19:13:51.597164 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sv8q2"] Jan 27 19:13:52 crc kubenswrapper[4907]: I0127 19:13:52.542214 4907 generic.go:334] "Generic (PLEG): container finished" podID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerID="1972ce9b986a53c4500cd6e27dba9a9d8d33834fa9147b3a4e101401c6c91ce4" exitCode=0 Jan 27 19:13:52 crc kubenswrapper[4907]: I0127 19:13:52.542281 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv8q2" event={"ID":"ef84ef6f-caca-4d3d-a89c-689d9183ee8d","Type":"ContainerDied","Data":"1972ce9b986a53c4500cd6e27dba9a9d8d33834fa9147b3a4e101401c6c91ce4"} Jan 27 19:13:52 crc kubenswrapper[4907]: I0127 19:13:52.542617 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv8q2" event={"ID":"ef84ef6f-caca-4d3d-a89c-689d9183ee8d","Type":"ContainerStarted","Data":"c9951ffb29d61c93df43d7bd0f43abc0771ff6370be8ef692b11392d4241602c"} Jan 27 19:13:54 crc kubenswrapper[4907]: I0127 19:13:54.573032 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv8q2" event={"ID":"ef84ef6f-caca-4d3d-a89c-689d9183ee8d","Type":"ContainerStarted","Data":"1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854"} Jan 27 19:13:59 crc kubenswrapper[4907]: I0127 19:13:59.715504 4907 generic.go:334] "Generic (PLEG): container finished" podID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerID="1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854" exitCode=0 Jan 27 19:13:59 crc kubenswrapper[4907]: I0127 19:13:59.716207 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv8q2" 
event={"ID":"ef84ef6f-caca-4d3d-a89c-689d9183ee8d","Type":"ContainerDied","Data":"1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854"} Jan 27 19:14:00 crc kubenswrapper[4907]: I0127 19:14:00.728242 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv8q2" event={"ID":"ef84ef6f-caca-4d3d-a89c-689d9183ee8d","Type":"ContainerStarted","Data":"5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec"} Jan 27 19:14:00 crc kubenswrapper[4907]: I0127 19:14:00.754347 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sv8q2" podStartSLOduration=3.027383903 podStartE2EDuration="10.754324308s" podCreationTimestamp="2026-01-27 19:13:50 +0000 UTC" firstStartedPulling="2026-01-27 19:13:52.544509332 +0000 UTC m=+4087.673791944" lastFinishedPulling="2026-01-27 19:14:00.271449737 +0000 UTC m=+4095.400732349" observedRunningTime="2026-01-27 19:14:00.745380316 +0000 UTC m=+4095.874662938" watchObservedRunningTime="2026-01-27 19:14:00.754324308 +0000 UTC m=+4095.883606930" Jan 27 19:14:01 crc kubenswrapper[4907]: I0127 19:14:01.047835 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:14:01 crc kubenswrapper[4907]: I0127 19:14:01.048200 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:14:02 crc kubenswrapper[4907]: I0127 19:14:02.111254 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sv8q2" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerName="registry-server" probeResult="failure" output=< Jan 27 19:14:02 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:14:02 crc kubenswrapper[4907]: > Jan 27 19:14:12 crc kubenswrapper[4907]: I0127 19:14:12.107314 4907 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-sv8q2" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerName="registry-server" probeResult="failure" output=< Jan 27 19:14:12 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:14:12 crc kubenswrapper[4907]: > Jan 27 19:14:22 crc kubenswrapper[4907]: I0127 19:14:22.103098 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sv8q2" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerName="registry-server" probeResult="failure" output=< Jan 27 19:14:22 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:14:22 crc kubenswrapper[4907]: > Jan 27 19:14:26 crc kubenswrapper[4907]: I0127 19:14:26.524130 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:14:26 crc kubenswrapper[4907]: I0127 19:14:26.524675 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:14:31 crc kubenswrapper[4907]: I0127 19:14:31.095908 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:14:31 crc kubenswrapper[4907]: I0127 19:14:31.155121 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:14:31 crc kubenswrapper[4907]: I0127 19:14:31.334757 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-sv8q2"] Jan 27 19:14:33 crc kubenswrapper[4907]: I0127 19:14:33.081216 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sv8q2" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerName="registry-server" containerID="cri-o://5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec" gracePeriod=2 Jan 27 19:14:33 crc kubenswrapper[4907]: E0127 19:14:33.276696 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef84ef6f_caca_4d3d_a89c_689d9183ee8d.slice/crio-5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec.scope\": RecentStats: unable to find data in memory cache]" Jan 27 19:14:33 crc kubenswrapper[4907]: I0127 19:14:33.662473 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:14:33 crc kubenswrapper[4907]: I0127 19:14:33.736601 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-catalog-content\") pod \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " Jan 27 19:14:33 crc kubenswrapper[4907]: I0127 19:14:33.750931 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnlnt\" (UniqueName: \"kubernetes.io/projected/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-kube-api-access-gnlnt\") pod \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " Jan 27 19:14:33 crc kubenswrapper[4907]: I0127 19:14:33.750991 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-utilities\") pod \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\" (UID: \"ef84ef6f-caca-4d3d-a89c-689d9183ee8d\") " Jan 27 19:14:33 crc kubenswrapper[4907]: I0127 19:14:33.755384 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-utilities" (OuterVolumeSpecName: "utilities") pod "ef84ef6f-caca-4d3d-a89c-689d9183ee8d" (UID: "ef84ef6f-caca-4d3d-a89c-689d9183ee8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:14:33 crc kubenswrapper[4907]: I0127 19:14:33.771948 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-kube-api-access-gnlnt" (OuterVolumeSpecName: "kube-api-access-gnlnt") pod "ef84ef6f-caca-4d3d-a89c-689d9183ee8d" (UID: "ef84ef6f-caca-4d3d-a89c-689d9183ee8d"). InnerVolumeSpecName "kube-api-access-gnlnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:14:33 crc kubenswrapper[4907]: I0127 19:14:33.854779 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnlnt\" (UniqueName: \"kubernetes.io/projected/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-kube-api-access-gnlnt\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:33 crc kubenswrapper[4907]: I0127 19:14:33.854809 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:33 crc kubenswrapper[4907]: I0127 19:14:33.886196 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef84ef6f-caca-4d3d-a89c-689d9183ee8d" (UID: "ef84ef6f-caca-4d3d-a89c-689d9183ee8d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:14:33 crc kubenswrapper[4907]: I0127 19:14:33.956824 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef84ef6f-caca-4d3d-a89c-689d9183ee8d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.096057 4907 generic.go:334] "Generic (PLEG): container finished" podID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerID="5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec" exitCode=0 Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.096189 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sv8q2" Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.097172 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv8q2" event={"ID":"ef84ef6f-caca-4d3d-a89c-689d9183ee8d","Type":"ContainerDied","Data":"5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec"} Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.097425 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv8q2" event={"ID":"ef84ef6f-caca-4d3d-a89c-689d9183ee8d","Type":"ContainerDied","Data":"c9951ffb29d61c93df43d7bd0f43abc0771ff6370be8ef692b11392d4241602c"} Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.097494 4907 scope.go:117] "RemoveContainer" containerID="5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec" Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.129502 4907 scope.go:117] "RemoveContainer" containerID="1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854" Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.151055 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sv8q2"] Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 
19:14:34.169471 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sv8q2"] Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.567072 4907 scope.go:117] "RemoveContainer" containerID="1972ce9b986a53c4500cd6e27dba9a9d8d33834fa9147b3a4e101401c6c91ce4" Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.626252 4907 scope.go:117] "RemoveContainer" containerID="5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec" Jan 27 19:14:34 crc kubenswrapper[4907]: E0127 19:14:34.626794 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec\": container with ID starting with 5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec not found: ID does not exist" containerID="5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec" Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.626845 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec"} err="failed to get container status \"5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec\": rpc error: code = NotFound desc = could not find container \"5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec\": container with ID starting with 5b8c39dcb7450df0f9fdcf3390d5e9f56be53cb9822ab7772d140f59b6b8f7ec not found: ID does not exist" Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.626876 4907 scope.go:117] "RemoveContainer" containerID="1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854" Jan 27 19:14:34 crc kubenswrapper[4907]: E0127 19:14:34.627194 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854\": container with ID 
starting with 1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854 not found: ID does not exist" containerID="1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854" Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.627218 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854"} err="failed to get container status \"1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854\": rpc error: code = NotFound desc = could not find container \"1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854\": container with ID starting with 1db99ee8761e2af824948fd049563397e8adedc2ae4f5f866da005a362019854 not found: ID does not exist" Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.627233 4907 scope.go:117] "RemoveContainer" containerID="1972ce9b986a53c4500cd6e27dba9a9d8d33834fa9147b3a4e101401c6c91ce4" Jan 27 19:14:34 crc kubenswrapper[4907]: E0127 19:14:34.627442 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1972ce9b986a53c4500cd6e27dba9a9d8d33834fa9147b3a4e101401c6c91ce4\": container with ID starting with 1972ce9b986a53c4500cd6e27dba9a9d8d33834fa9147b3a4e101401c6c91ce4 not found: ID does not exist" containerID="1972ce9b986a53c4500cd6e27dba9a9d8d33834fa9147b3a4e101401c6c91ce4" Jan 27 19:14:34 crc kubenswrapper[4907]: I0127 19:14:34.627463 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1972ce9b986a53c4500cd6e27dba9a9d8d33834fa9147b3a4e101401c6c91ce4"} err="failed to get container status \"1972ce9b986a53c4500cd6e27dba9a9d8d33834fa9147b3a4e101401c6c91ce4\": rpc error: code = NotFound desc = could not find container \"1972ce9b986a53c4500cd6e27dba9a9d8d33834fa9147b3a4e101401c6c91ce4\": container with ID starting with 1972ce9b986a53c4500cd6e27dba9a9d8d33834fa9147b3a4e101401c6c91ce4 not found: 
ID does not exist" Jan 27 19:14:35 crc kubenswrapper[4907]: I0127 19:14:35.763625 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" path="/var/lib/kubelet/pods/ef84ef6f-caca-4d3d-a89c-689d9183ee8d/volumes" Jan 27 19:14:56 crc kubenswrapper[4907]: I0127 19:14:56.521420 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:14:56 crc kubenswrapper[4907]: I0127 19:14:56.522039 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.201238 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb"] Jan 27 19:15:00 crc kubenswrapper[4907]: E0127 19:15:00.202434 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerName="registry-server" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.202453 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerName="registry-server" Jan 27 19:15:00 crc kubenswrapper[4907]: E0127 19:15:00.202508 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerName="extract-utilities" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.202517 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" 
containerName="extract-utilities" Jan 27 19:15:00 crc kubenswrapper[4907]: E0127 19:15:00.202538 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerName="extract-content" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.202551 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerName="extract-content" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.202896 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef84ef6f-caca-4d3d-a89c-689d9183ee8d" containerName="registry-server" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.204048 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.206156 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.206392 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.229067 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb"] Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.307228 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2386ffb-533f-4e55-8e2c-b56b123db6cb-secret-volume\") pod \"collect-profiles-29492355-hpblb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.307500 4907 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2386ffb-533f-4e55-8e2c-b56b123db6cb-config-volume\") pod \"collect-profiles-29492355-hpblb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.307669 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks9kq\" (UniqueName: \"kubernetes.io/projected/b2386ffb-533f-4e55-8e2c-b56b123db6cb-kube-api-access-ks9kq\") pod \"collect-profiles-29492355-hpblb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.409897 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2386ffb-533f-4e55-8e2c-b56b123db6cb-config-volume\") pod \"collect-profiles-29492355-hpblb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.410006 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks9kq\" (UniqueName: \"kubernetes.io/projected/b2386ffb-533f-4e55-8e2c-b56b123db6cb-kube-api-access-ks9kq\") pod \"collect-profiles-29492355-hpblb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.410092 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2386ffb-533f-4e55-8e2c-b56b123db6cb-secret-volume\") pod \"collect-profiles-29492355-hpblb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.411877 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2386ffb-533f-4e55-8e2c-b56b123db6cb-config-volume\") pod \"collect-profiles-29492355-hpblb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.423406 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2386ffb-533f-4e55-8e2c-b56b123db6cb-secret-volume\") pod \"collect-profiles-29492355-hpblb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.435075 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks9kq\" (UniqueName: \"kubernetes.io/projected/b2386ffb-533f-4e55-8e2c-b56b123db6cb-kube-api-access-ks9kq\") pod \"collect-profiles-29492355-hpblb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:00 crc kubenswrapper[4907]: I0127 19:15:00.527102 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:01 crc kubenswrapper[4907]: I0127 19:15:01.599808 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb"] Jan 27 19:15:02 crc kubenswrapper[4907]: I0127 19:15:02.383523 4907 generic.go:334] "Generic (PLEG): container finished" podID="b2386ffb-533f-4e55-8e2c-b56b123db6cb" containerID="88bf80890e776fa0c9d90ac4dc0c374ca04d5702e03cfe204b051512a22db9f4" exitCode=0 Jan 27 19:15:02 crc kubenswrapper[4907]: I0127 19:15:02.383593 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" event={"ID":"b2386ffb-533f-4e55-8e2c-b56b123db6cb","Type":"ContainerDied","Data":"88bf80890e776fa0c9d90ac4dc0c374ca04d5702e03cfe204b051512a22db9f4"} Jan 27 19:15:02 crc kubenswrapper[4907]: I0127 19:15:02.383884 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" event={"ID":"b2386ffb-533f-4e55-8e2c-b56b123db6cb","Type":"ContainerStarted","Data":"2a5d9b6ad1a69b1ab3e6ddf940674710d95a2dde6403d7e081c05553fc6e53c7"} Jan 27 19:15:03 crc kubenswrapper[4907]: I0127 19:15:03.815960 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:03 crc kubenswrapper[4907]: I0127 19:15:03.921296 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks9kq\" (UniqueName: \"kubernetes.io/projected/b2386ffb-533f-4e55-8e2c-b56b123db6cb-kube-api-access-ks9kq\") pod \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " Jan 27 19:15:03 crc kubenswrapper[4907]: I0127 19:15:03.921572 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2386ffb-533f-4e55-8e2c-b56b123db6cb-config-volume\") pod \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " Jan 27 19:15:03 crc kubenswrapper[4907]: I0127 19:15:03.921686 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2386ffb-533f-4e55-8e2c-b56b123db6cb-secret-volume\") pod \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\" (UID: \"b2386ffb-533f-4e55-8e2c-b56b123db6cb\") " Jan 27 19:15:03 crc kubenswrapper[4907]: I0127 19:15:03.924053 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2386ffb-533f-4e55-8e2c-b56b123db6cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "b2386ffb-533f-4e55-8e2c-b56b123db6cb" (UID: "b2386ffb-533f-4e55-8e2c-b56b123db6cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:15:03 crc kubenswrapper[4907]: I0127 19:15:03.938043 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2386ffb-533f-4e55-8e2c-b56b123db6cb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b2386ffb-533f-4e55-8e2c-b56b123db6cb" (UID: "b2386ffb-533f-4e55-8e2c-b56b123db6cb"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:15:03 crc kubenswrapper[4907]: I0127 19:15:03.938170 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2386ffb-533f-4e55-8e2c-b56b123db6cb-kube-api-access-ks9kq" (OuterVolumeSpecName: "kube-api-access-ks9kq") pod "b2386ffb-533f-4e55-8e2c-b56b123db6cb" (UID: "b2386ffb-533f-4e55-8e2c-b56b123db6cb"). InnerVolumeSpecName "kube-api-access-ks9kq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:15:04 crc kubenswrapper[4907]: I0127 19:15:04.025638 4907 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2386ffb-533f-4e55-8e2c-b56b123db6cb-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:04 crc kubenswrapper[4907]: I0127 19:15:04.025693 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks9kq\" (UniqueName: \"kubernetes.io/projected/b2386ffb-533f-4e55-8e2c-b56b123db6cb-kube-api-access-ks9kq\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:04 crc kubenswrapper[4907]: I0127 19:15:04.025706 4907 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2386ffb-533f-4e55-8e2c-b56b123db6cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:04 crc kubenswrapper[4907]: I0127 19:15:04.407779 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" event={"ID":"b2386ffb-533f-4e55-8e2c-b56b123db6cb","Type":"ContainerDied","Data":"2a5d9b6ad1a69b1ab3e6ddf940674710d95a2dde6403d7e081c05553fc6e53c7"} Jan 27 19:15:04 crc kubenswrapper[4907]: I0127 19:15:04.408175 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a5d9b6ad1a69b1ab3e6ddf940674710d95a2dde6403d7e081c05553fc6e53c7" Jan 27 19:15:04 crc kubenswrapper[4907]: I0127 19:15:04.407874 4907 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-hpblb" Jan 27 19:15:04 crc kubenswrapper[4907]: I0127 19:15:04.899805 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc"] Jan 27 19:15:04 crc kubenswrapper[4907]: I0127 19:15:04.911300 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-h8pgc"] Jan 27 19:15:05 crc kubenswrapper[4907]: I0127 19:15:05.766071 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e18727dc-e815-4722-bbce-4bfe5a8ee4f2" path="/var/lib/kubelet/pods/e18727dc-e815-4722-bbce-4bfe5a8ee4f2/volumes" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.064536 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jhlp6"] Jan 27 19:15:18 crc kubenswrapper[4907]: E0127 19:15:18.065943 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2386ffb-533f-4e55-8e2c-b56b123db6cb" containerName="collect-profiles" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.065965 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2386ffb-533f-4e55-8e2c-b56b123db6cb" containerName="collect-profiles" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.066381 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2386ffb-533f-4e55-8e2c-b56b123db6cb" containerName="collect-profiles" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.069638 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.086361 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jhlp6"] Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.155349 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-catalog-content\") pod \"certified-operators-jhlp6\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.155483 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-utilities\") pod \"certified-operators-jhlp6\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.155591 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4szf\" (UniqueName: \"kubernetes.io/projected/4ef5aee7-bf46-43d8-9adb-55a7add33715-kube-api-access-q4szf\") pod \"certified-operators-jhlp6\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.258933 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-utilities\") pod \"certified-operators-jhlp6\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.259099 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q4szf\" (UniqueName: \"kubernetes.io/projected/4ef5aee7-bf46-43d8-9adb-55a7add33715-kube-api-access-q4szf\") pod \"certified-operators-jhlp6\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.259489 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-utilities\") pod \"certified-operators-jhlp6\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.259930 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-catalog-content\") pod \"certified-operators-jhlp6\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.260360 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-catalog-content\") pod \"certified-operators-jhlp6\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.280435 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4szf\" (UniqueName: \"kubernetes.io/projected/4ef5aee7-bf46-43d8-9adb-55a7add33715-kube-api-access-q4szf\") pod \"certified-operators-jhlp6\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.404726 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:18 crc kubenswrapper[4907]: I0127 19:15:18.957970 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jhlp6"] Jan 27 19:15:19 crc kubenswrapper[4907]: I0127 19:15:19.601439 4907 generic.go:334] "Generic (PLEG): container finished" podID="4ef5aee7-bf46-43d8-9adb-55a7add33715" containerID="081e4ca12741028cf64167377168dbe404dfc522c1ad1cab136c9ea0c781ed70" exitCode=0 Jan 27 19:15:19 crc kubenswrapper[4907]: I0127 19:15:19.601494 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhlp6" event={"ID":"4ef5aee7-bf46-43d8-9adb-55a7add33715","Type":"ContainerDied","Data":"081e4ca12741028cf64167377168dbe404dfc522c1ad1cab136c9ea0c781ed70"} Jan 27 19:15:19 crc kubenswrapper[4907]: I0127 19:15:19.601525 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhlp6" event={"ID":"4ef5aee7-bf46-43d8-9adb-55a7add33715","Type":"ContainerStarted","Data":"b5b7155a8a43c9ece4bd431e5222b2faef155ab6cd58f2badefa29db92f07f77"} Jan 27 19:15:20 crc kubenswrapper[4907]: I0127 19:15:20.614607 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhlp6" event={"ID":"4ef5aee7-bf46-43d8-9adb-55a7add33715","Type":"ContainerStarted","Data":"6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750"} Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.247808 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-crjt4"] Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.250190 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.260309 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-crjt4"] Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.338821 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-catalog-content\") pod \"redhat-marketplace-crjt4\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.338953 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc6vk\" (UniqueName: \"kubernetes.io/projected/c722dd63-6f6a-4a90-b8dc-f783ea762dee-kube-api-access-gc6vk\") pod \"redhat-marketplace-crjt4\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.339022 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-utilities\") pod \"redhat-marketplace-crjt4\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.441541 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-catalog-content\") pod \"redhat-marketplace-crjt4\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.442135 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gc6vk\" (UniqueName: \"kubernetes.io/projected/c722dd63-6f6a-4a90-b8dc-f783ea762dee-kube-api-access-gc6vk\") pod \"redhat-marketplace-crjt4\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.442308 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-utilities\") pod \"redhat-marketplace-crjt4\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.442402 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-catalog-content\") pod \"redhat-marketplace-crjt4\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.442712 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-utilities\") pod \"redhat-marketplace-crjt4\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.482468 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc6vk\" (UniqueName: \"kubernetes.io/projected/c722dd63-6f6a-4a90-b8dc-f783ea762dee-kube-api-access-gc6vk\") pod \"redhat-marketplace-crjt4\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:21 crc kubenswrapper[4907]: I0127 19:15:21.585127 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:22 crc kubenswrapper[4907]: I0127 19:15:22.188191 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-crjt4"] Jan 27 19:15:22 crc kubenswrapper[4907]: I0127 19:15:22.638194 4907 generic.go:334] "Generic (PLEG): container finished" podID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" containerID="3dee0fe46738fb530860500418b634b5c7769eae19987b6473b771ddb3aa71b2" exitCode=0 Jan 27 19:15:22 crc kubenswrapper[4907]: I0127 19:15:22.638314 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crjt4" event={"ID":"c722dd63-6f6a-4a90-b8dc-f783ea762dee","Type":"ContainerDied","Data":"3dee0fe46738fb530860500418b634b5c7769eae19987b6473b771ddb3aa71b2"} Jan 27 19:15:22 crc kubenswrapper[4907]: I0127 19:15:22.639447 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crjt4" event={"ID":"c722dd63-6f6a-4a90-b8dc-f783ea762dee","Type":"ContainerStarted","Data":"6998dd2a5993f2277280b5abce6a94a092bf7b305f23f23ae12483299cadf9b0"} Jan 27 19:15:22 crc kubenswrapper[4907]: I0127 19:15:22.645585 4907 generic.go:334] "Generic (PLEG): container finished" podID="4ef5aee7-bf46-43d8-9adb-55a7add33715" containerID="6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750" exitCode=0 Jan 27 19:15:22 crc kubenswrapper[4907]: I0127 19:15:22.645624 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhlp6" event={"ID":"4ef5aee7-bf46-43d8-9adb-55a7add33715","Type":"ContainerDied","Data":"6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750"} Jan 27 19:15:24 crc kubenswrapper[4907]: I0127 19:15:24.668852 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crjt4" 
event={"ID":"c722dd63-6f6a-4a90-b8dc-f783ea762dee","Type":"ContainerStarted","Data":"590176dec28398f976ed4ca07ba6662c5976d8e89834c861a9bec7a1b62acbef"} Jan 27 19:15:24 crc kubenswrapper[4907]: I0127 19:15:24.671472 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhlp6" event={"ID":"4ef5aee7-bf46-43d8-9adb-55a7add33715","Type":"ContainerStarted","Data":"a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef"} Jan 27 19:15:24 crc kubenswrapper[4907]: I0127 19:15:24.727274 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jhlp6" podStartSLOduration=3.191903899 podStartE2EDuration="6.727255501s" podCreationTimestamp="2026-01-27 19:15:18 +0000 UTC" firstStartedPulling="2026-01-27 19:15:19.603529214 +0000 UTC m=+4174.732811836" lastFinishedPulling="2026-01-27 19:15:23.138880826 +0000 UTC m=+4178.268163438" observedRunningTime="2026-01-27 19:15:24.711480007 +0000 UTC m=+4179.840762619" watchObservedRunningTime="2026-01-27 19:15:24.727255501 +0000 UTC m=+4179.856538113" Jan 27 19:15:25 crc kubenswrapper[4907]: I0127 19:15:25.684304 4907 generic.go:334] "Generic (PLEG): container finished" podID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" containerID="590176dec28398f976ed4ca07ba6662c5976d8e89834c861a9bec7a1b62acbef" exitCode=0 Jan 27 19:15:25 crc kubenswrapper[4907]: I0127 19:15:25.684354 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crjt4" event={"ID":"c722dd63-6f6a-4a90-b8dc-f783ea762dee","Type":"ContainerDied","Data":"590176dec28398f976ed4ca07ba6662c5976d8e89834c861a9bec7a1b62acbef"} Jan 27 19:15:26 crc kubenswrapper[4907]: I0127 19:15:26.521796 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 27 19:15:26 crc kubenswrapper[4907]: I0127 19:15:26.522471 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:15:26 crc kubenswrapper[4907]: I0127 19:15:26.522697 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 19:15:26 crc kubenswrapper[4907]: I0127 19:15:26.523707 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8138402587da7ef9ba4b0645d19833dc5fb3c20ffc6c4811bbcc443d5ec8c725"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:15:26 crc kubenswrapper[4907]: I0127 19:15:26.523871 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://8138402587da7ef9ba4b0645d19833dc5fb3c20ffc6c4811bbcc443d5ec8c725" gracePeriod=600 Jan 27 19:15:27 crc kubenswrapper[4907]: I0127 19:15:27.710734 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crjt4" event={"ID":"c722dd63-6f6a-4a90-b8dc-f783ea762dee","Type":"ContainerStarted","Data":"656037865c1f4fc2b46cd9c2335bbd5f25c76de5d2f90a26270cc8cf4052017d"} Jan 27 19:15:27 crc kubenswrapper[4907]: I0127 19:15:27.714228 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" 
containerID="8138402587da7ef9ba4b0645d19833dc5fb3c20ffc6c4811bbcc443d5ec8c725" exitCode=0 Jan 27 19:15:27 crc kubenswrapper[4907]: I0127 19:15:27.714277 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"8138402587da7ef9ba4b0645d19833dc5fb3c20ffc6c4811bbcc443d5ec8c725"} Jan 27 19:15:27 crc kubenswrapper[4907]: I0127 19:15:27.714366 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41"} Jan 27 19:15:27 crc kubenswrapper[4907]: I0127 19:15:27.714397 4907 scope.go:117] "RemoveContainer" containerID="ffe8c361b2b1c797fcc7de11319f0c178c2805313080e7d374862a4730118809" Jan 27 19:15:27 crc kubenswrapper[4907]: I0127 19:15:27.733287 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-crjt4" podStartSLOduration=2.587407649 podStartE2EDuration="6.733270774s" podCreationTimestamp="2026-01-27 19:15:21 +0000 UTC" firstStartedPulling="2026-01-27 19:15:22.64020213 +0000 UTC m=+4177.769484742" lastFinishedPulling="2026-01-27 19:15:26.786065245 +0000 UTC m=+4181.915347867" observedRunningTime="2026-01-27 19:15:27.73239749 +0000 UTC m=+4182.861680122" watchObservedRunningTime="2026-01-27 19:15:27.733270774 +0000 UTC m=+4182.862553386" Jan 27 19:15:28 crc kubenswrapper[4907]: I0127 19:15:28.492805 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:28 crc kubenswrapper[4907]: I0127 19:15:28.493319 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:28 crc kubenswrapper[4907]: I0127 
19:15:28.566369 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:28 crc kubenswrapper[4907]: I0127 19:15:28.778722 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:30 crc kubenswrapper[4907]: I0127 19:15:30.649347 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jhlp6"] Jan 27 19:15:30 crc kubenswrapper[4907]: I0127 19:15:30.750027 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jhlp6" podUID="4ef5aee7-bf46-43d8-9adb-55a7add33715" containerName="registry-server" containerID="cri-o://a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef" gracePeriod=2 Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.352706 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.357993 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-catalog-content\") pod \"4ef5aee7-bf46-43d8-9adb-55a7add33715\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.358238 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-utilities\") pod \"4ef5aee7-bf46-43d8-9adb-55a7add33715\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.358277 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4szf\" (UniqueName: 
\"kubernetes.io/projected/4ef5aee7-bf46-43d8-9adb-55a7add33715-kube-api-access-q4szf\") pod \"4ef5aee7-bf46-43d8-9adb-55a7add33715\" (UID: \"4ef5aee7-bf46-43d8-9adb-55a7add33715\") " Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.358973 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-utilities" (OuterVolumeSpecName: "utilities") pod "4ef5aee7-bf46-43d8-9adb-55a7add33715" (UID: "4ef5aee7-bf46-43d8-9adb-55a7add33715"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.365385 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef5aee7-bf46-43d8-9adb-55a7add33715-kube-api-access-q4szf" (OuterVolumeSpecName: "kube-api-access-q4szf") pod "4ef5aee7-bf46-43d8-9adb-55a7add33715" (UID: "4ef5aee7-bf46-43d8-9adb-55a7add33715"). InnerVolumeSpecName "kube-api-access-q4szf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.433735 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ef5aee7-bf46-43d8-9adb-55a7add33715" (UID: "4ef5aee7-bf46-43d8-9adb-55a7add33715"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.460134 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.460166 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef5aee7-bf46-43d8-9adb-55a7add33715-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.460177 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4szf\" (UniqueName: \"kubernetes.io/projected/4ef5aee7-bf46-43d8-9adb-55a7add33715-kube-api-access-q4szf\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.585591 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.585647 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.642099 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.789675 4907 generic.go:334] "Generic (PLEG): container finished" podID="4ef5aee7-bf46-43d8-9adb-55a7add33715" containerID="a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef" exitCode=0 Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.789926 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhlp6" 
event={"ID":"4ef5aee7-bf46-43d8-9adb-55a7add33715","Type":"ContainerDied","Data":"a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef"} Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.790073 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jhlp6" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.790845 4907 scope.go:117] "RemoveContainer" containerID="a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.795662 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhlp6" event={"ID":"4ef5aee7-bf46-43d8-9adb-55a7add33715","Type":"ContainerDied","Data":"b5b7155a8a43c9ece4bd431e5222b2faef155ab6cd58f2badefa29db92f07f77"} Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.834329 4907 scope.go:117] "RemoveContainer" containerID="6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.852848 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.863679 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jhlp6"] Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.868102 4907 scope.go:117] "RemoveContainer" containerID="081e4ca12741028cf64167377168dbe404dfc522c1ad1cab136c9ea0c781ed70" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.882287 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jhlp6"] Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.920347 4907 scope.go:117] "RemoveContainer" containerID="a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef" Jan 27 19:15:31 crc kubenswrapper[4907]: E0127 19:15:31.920784 4907 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef\": container with ID starting with a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef not found: ID does not exist" containerID="a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.920834 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef"} err="failed to get container status \"a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef\": rpc error: code = NotFound desc = could not find container \"a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef\": container with ID starting with a937d1c830251caea455b6143ecc4ee84a6e235ac1fe3d6789f4548d897a4eef not found: ID does not exist" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.920865 4907 scope.go:117] "RemoveContainer" containerID="6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750" Jan 27 19:15:31 crc kubenswrapper[4907]: E0127 19:15:31.921348 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750\": container with ID starting with 6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750 not found: ID does not exist" containerID="6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.921378 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750"} err="failed to get container status \"6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750\": rpc error: code = NotFound desc = could 
not find container \"6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750\": container with ID starting with 6041efc647dd38f7278d0c93ebf5f7cd4ea186f5703f2983b17619664a85c750 not found: ID does not exist" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.921396 4907 scope.go:117] "RemoveContainer" containerID="081e4ca12741028cf64167377168dbe404dfc522c1ad1cab136c9ea0c781ed70" Jan 27 19:15:31 crc kubenswrapper[4907]: E0127 19:15:31.921745 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"081e4ca12741028cf64167377168dbe404dfc522c1ad1cab136c9ea0c781ed70\": container with ID starting with 081e4ca12741028cf64167377168dbe404dfc522c1ad1cab136c9ea0c781ed70 not found: ID does not exist" containerID="081e4ca12741028cf64167377168dbe404dfc522c1ad1cab136c9ea0c781ed70" Jan 27 19:15:31 crc kubenswrapper[4907]: I0127 19:15:31.921779 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"081e4ca12741028cf64167377168dbe404dfc522c1ad1cab136c9ea0c781ed70"} err="failed to get container status \"081e4ca12741028cf64167377168dbe404dfc522c1ad1cab136c9ea0c781ed70\": rpc error: code = NotFound desc = could not find container \"081e4ca12741028cf64167377168dbe404dfc522c1ad1cab136c9ea0c781ed70\": container with ID starting with 081e4ca12741028cf64167377168dbe404dfc522c1ad1cab136c9ea0c781ed70 not found: ID does not exist" Jan 27 19:15:33 crc kubenswrapper[4907]: I0127 19:15:33.761391 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ef5aee7-bf46-43d8-9adb-55a7add33715" path="/var/lib/kubelet/pods/4ef5aee7-bf46-43d8-9adb-55a7add33715/volumes" Jan 27 19:15:34 crc kubenswrapper[4907]: I0127 19:15:34.039245 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-crjt4"] Jan 27 19:15:34 crc kubenswrapper[4907]: I0127 19:15:34.039510 4907 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-marketplace-crjt4" podUID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" containerName="registry-server" containerID="cri-o://656037865c1f4fc2b46cd9c2335bbd5f25c76de5d2f90a26270cc8cf4052017d" gracePeriod=2 Jan 27 19:15:34 crc kubenswrapper[4907]: I0127 19:15:34.825520 4907 generic.go:334] "Generic (PLEG): container finished" podID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" containerID="656037865c1f4fc2b46cd9c2335bbd5f25c76de5d2f90a26270cc8cf4052017d" exitCode=0 Jan 27 19:15:34 crc kubenswrapper[4907]: I0127 19:15:34.825622 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crjt4" event={"ID":"c722dd63-6f6a-4a90-b8dc-f783ea762dee","Type":"ContainerDied","Data":"656037865c1f4fc2b46cd9c2335bbd5f25c76de5d2f90a26270cc8cf4052017d"} Jan 27 19:15:34 crc kubenswrapper[4907]: I0127 19:15:34.980668 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.166275 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc6vk\" (UniqueName: \"kubernetes.io/projected/c722dd63-6f6a-4a90-b8dc-f783ea762dee-kube-api-access-gc6vk\") pod \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.166520 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-utilities\") pod \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.166627 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-catalog-content\") pod 
\"c722dd63-6f6a-4a90-b8dc-f783ea762dee\" (UID: \"c722dd63-6f6a-4a90-b8dc-f783ea762dee\") " Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.167904 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-utilities" (OuterVolumeSpecName: "utilities") pod "c722dd63-6f6a-4a90-b8dc-f783ea762dee" (UID: "c722dd63-6f6a-4a90-b8dc-f783ea762dee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.172917 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c722dd63-6f6a-4a90-b8dc-f783ea762dee-kube-api-access-gc6vk" (OuterVolumeSpecName: "kube-api-access-gc6vk") pod "c722dd63-6f6a-4a90-b8dc-f783ea762dee" (UID: "c722dd63-6f6a-4a90-b8dc-f783ea762dee"). InnerVolumeSpecName "kube-api-access-gc6vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.187582 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c722dd63-6f6a-4a90-b8dc-f783ea762dee" (UID: "c722dd63-6f6a-4a90-b8dc-f783ea762dee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.269501 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc6vk\" (UniqueName: \"kubernetes.io/projected/c722dd63-6f6a-4a90-b8dc-f783ea762dee-kube-api-access-gc6vk\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.269542 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.269579 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c722dd63-6f6a-4a90-b8dc-f783ea762dee-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.840472 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crjt4" event={"ID":"c722dd63-6f6a-4a90-b8dc-f783ea762dee","Type":"ContainerDied","Data":"6998dd2a5993f2277280b5abce6a94a092bf7b305f23f23ae12483299cadf9b0"} Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.840674 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crjt4" Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.840866 4907 scope.go:117] "RemoveContainer" containerID="656037865c1f4fc2b46cd9c2335bbd5f25c76de5d2f90a26270cc8cf4052017d" Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.870695 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-crjt4"] Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.871135 4907 scope.go:117] "RemoveContainer" containerID="590176dec28398f976ed4ca07ba6662c5976d8e89834c861a9bec7a1b62acbef" Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.882176 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-crjt4"] Jan 27 19:15:35 crc kubenswrapper[4907]: I0127 19:15:35.908486 4907 scope.go:117] "RemoveContainer" containerID="3dee0fe46738fb530860500418b634b5c7769eae19987b6473b771ddb3aa71b2" Jan 27 19:15:37 crc kubenswrapper[4907]: I0127 19:15:37.763469 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" path="/var/lib/kubelet/pods/c722dd63-6f6a-4a90-b8dc-f783ea762dee/volumes" Jan 27 19:16:01 crc kubenswrapper[4907]: I0127 19:16:01.218603 4907 scope.go:117] "RemoveContainer" containerID="ea20d869372e9205fd63ca951a287290bf5187b3c88cc6d4f04543aaf6c630c3" Jan 27 19:17:56 crc kubenswrapper[4907]: I0127 19:17:56.521336 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:17:56 crc kubenswrapper[4907]: I0127 19:17:56.522031 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:18:26 crc kubenswrapper[4907]: I0127 19:18:26.520907 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:18:26 crc kubenswrapper[4907]: I0127 19:18:26.521521 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:18:37 crc kubenswrapper[4907]: I0127 19:18:37.812728 4907 trace.go:236] Trace[1086698741]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-0" (27-Jan-2026 19:18:36.649) (total time: 1162ms): Jan 27 19:18:37 crc kubenswrapper[4907]: Trace[1086698741]: [1.162355942s] [1.162355942s] END Jan 27 19:18:56 crc kubenswrapper[4907]: I0127 19:18:56.521460 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:18:56 crc kubenswrapper[4907]: I0127 19:18:56.522120 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 
19:18:56 crc kubenswrapper[4907]: I0127 19:18:56.522178 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 19:18:56 crc kubenswrapper[4907]: I0127 19:18:56.523226 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:18:56 crc kubenswrapper[4907]: I0127 19:18:56.523291 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" gracePeriod=600 Jan 27 19:18:57 crc kubenswrapper[4907]: E0127 19:18:57.675270 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:18:58 crc kubenswrapper[4907]: I0127 19:18:58.165847 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" exitCode=0 Jan 27 19:18:58 crc kubenswrapper[4907]: I0127 19:18:58.165888 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" 
event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41"} Jan 27 19:18:58 crc kubenswrapper[4907]: I0127 19:18:58.165931 4907 scope.go:117] "RemoveContainer" containerID="8138402587da7ef9ba4b0645d19833dc5fb3c20ffc6c4811bbcc443d5ec8c725" Jan 27 19:18:58 crc kubenswrapper[4907]: I0127 19:18:58.166748 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:18:58 crc kubenswrapper[4907]: E0127 19:18:58.167095 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:19:13 crc kubenswrapper[4907]: I0127 19:19:13.750733 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:19:13 crc kubenswrapper[4907]: E0127 19:19:13.751642 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:19:27 crc kubenswrapper[4907]: I0127 19:19:27.751071 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:19:27 crc kubenswrapper[4907]: E0127 19:19:27.751878 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:19:39 crc kubenswrapper[4907]: I0127 19:19:39.748679 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:19:39 crc kubenswrapper[4907]: E0127 19:19:39.749595 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:19:53 crc kubenswrapper[4907]: I0127 19:19:53.749757 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:19:53 crc kubenswrapper[4907]: E0127 19:19:53.751416 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:20:06 crc kubenswrapper[4907]: I0127 19:20:06.748301 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:20:06 crc kubenswrapper[4907]: E0127 19:20:06.749177 4907 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:20:19 crc kubenswrapper[4907]: I0127 19:20:19.748904 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:20:19 crc kubenswrapper[4907]: E0127 19:20:19.749923 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:20:30 crc kubenswrapper[4907]: I0127 19:20:30.748427 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:20:30 crc kubenswrapper[4907]: E0127 19:20:30.749361 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:20:43 crc kubenswrapper[4907]: I0127 19:20:43.748068 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:20:43 crc kubenswrapper[4907]: E0127 19:20:43.748863 4907 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:20:57 crc kubenswrapper[4907]: I0127 19:20:56.998752 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:20:57 crc kubenswrapper[4907]: E0127 19:20:57.072535 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:21:09 crc kubenswrapper[4907]: I0127 19:21:09.748160 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:21:09 crc kubenswrapper[4907]: E0127 19:21:09.749083 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:21:22 crc kubenswrapper[4907]: I0127 19:21:22.748310 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:21:22 crc kubenswrapper[4907]: E0127 19:21:22.749076 4907 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:21:27 crc kubenswrapper[4907]: E0127 19:21:27.079024 4907 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.184:33290->38.102.83.184:45697: read tcp 38.102.83.184:33290->38.102.83.184:45697: read: connection reset by peer Jan 27 19:21:34 crc kubenswrapper[4907]: I0127 19:21:34.748516 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:21:34 crc kubenswrapper[4907]: E0127 19:21:34.749463 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:21:45 crc kubenswrapper[4907]: I0127 19:21:45.763523 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:21:45 crc kubenswrapper[4907]: E0127 19:21:45.765137 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" 
podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:21:57 crc kubenswrapper[4907]: I0127 19:21:57.748753 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:21:57 crc kubenswrapper[4907]: E0127 19:21:57.750738 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:22:09 crc kubenswrapper[4907]: I0127 19:22:09.748962 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:22:09 crc kubenswrapper[4907]: E0127 19:22:09.749833 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:22:21 crc kubenswrapper[4907]: I0127 19:22:21.749330 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:22:21 crc kubenswrapper[4907]: E0127 19:22:21.750123 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:22:35 crc kubenswrapper[4907]: I0127 19:22:35.748548 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:22:35 crc kubenswrapper[4907]: E0127 19:22:35.750648 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.817886 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xsfzv"] Jan 27 19:22:42 crc kubenswrapper[4907]: E0127 19:22:42.818868 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" containerName="registry-server" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.818880 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" containerName="registry-server" Jan 27 19:22:42 crc kubenswrapper[4907]: E0127 19:22:42.818890 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef5aee7-bf46-43d8-9adb-55a7add33715" containerName="extract-content" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.818898 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef5aee7-bf46-43d8-9adb-55a7add33715" containerName="extract-content" Jan 27 19:22:42 crc kubenswrapper[4907]: E0127 19:22:42.818914 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef5aee7-bf46-43d8-9adb-55a7add33715" containerName="registry-server" Jan 27 19:22:42 crc kubenswrapper[4907]: 
I0127 19:22:42.818920 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef5aee7-bf46-43d8-9adb-55a7add33715" containerName="registry-server" Jan 27 19:22:42 crc kubenswrapper[4907]: E0127 19:22:42.818938 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" containerName="extract-utilities" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.818944 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" containerName="extract-utilities" Jan 27 19:22:42 crc kubenswrapper[4907]: E0127 19:22:42.818960 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef5aee7-bf46-43d8-9adb-55a7add33715" containerName="extract-utilities" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.818967 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef5aee7-bf46-43d8-9adb-55a7add33715" containerName="extract-utilities" Jan 27 19:22:42 crc kubenswrapper[4907]: E0127 19:22:42.818992 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" containerName="extract-content" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.818998 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" containerName="extract-content" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.819197 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef5aee7-bf46-43d8-9adb-55a7add33715" containerName="registry-server" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.819225 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="c722dd63-6f6a-4a90-b8dc-f783ea762dee" containerName="registry-server" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.820988 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.852527 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xsfzv"] Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.983974 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-utilities\") pod \"community-operators-xsfzv\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.984055 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-catalog-content\") pod \"community-operators-xsfzv\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:42 crc kubenswrapper[4907]: I0127 19:22:42.984373 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqchk\" (UniqueName: \"kubernetes.io/projected/9db2604e-4039-4a0d-8bf9-f80c51d3df52-kube-api-access-hqchk\") pod \"community-operators-xsfzv\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:43 crc kubenswrapper[4907]: I0127 19:22:43.087023 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqchk\" (UniqueName: \"kubernetes.io/projected/9db2604e-4039-4a0d-8bf9-f80c51d3df52-kube-api-access-hqchk\") pod \"community-operators-xsfzv\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:43 crc kubenswrapper[4907]: I0127 19:22:43.087130 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-utilities\") pod \"community-operators-xsfzv\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:43 crc kubenswrapper[4907]: I0127 19:22:43.087173 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-catalog-content\") pod \"community-operators-xsfzv\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:43 crc kubenswrapper[4907]: I0127 19:22:43.087795 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-utilities\") pod \"community-operators-xsfzv\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:43 crc kubenswrapper[4907]: I0127 19:22:43.087864 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-catalog-content\") pod \"community-operators-xsfzv\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:43 crc kubenswrapper[4907]: I0127 19:22:43.108834 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqchk\" (UniqueName: \"kubernetes.io/projected/9db2604e-4039-4a0d-8bf9-f80c51d3df52-kube-api-access-hqchk\") pod \"community-operators-xsfzv\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:43 crc kubenswrapper[4907]: I0127 19:22:43.146333 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:43 crc kubenswrapper[4907]: I0127 19:22:43.768055 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xsfzv"] Jan 27 19:22:44 crc kubenswrapper[4907]: I0127 19:22:44.352054 4907 generic.go:334] "Generic (PLEG): container finished" podID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" containerID="a498f1cf6690ba9e1f5da0d6f91572beb398ab11f1684ec365cacfd8680e8a40" exitCode=0 Jan 27 19:22:44 crc kubenswrapper[4907]: I0127 19:22:44.352357 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsfzv" event={"ID":"9db2604e-4039-4a0d-8bf9-f80c51d3df52","Type":"ContainerDied","Data":"a498f1cf6690ba9e1f5da0d6f91572beb398ab11f1684ec365cacfd8680e8a40"} Jan 27 19:22:44 crc kubenswrapper[4907]: I0127 19:22:44.352384 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsfzv" event={"ID":"9db2604e-4039-4a0d-8bf9-f80c51d3df52","Type":"ContainerStarted","Data":"f21220de24ed1f78011e22d3e61bccfd658536a3d27cca1de7576fd6efef89ec"} Jan 27 19:22:44 crc kubenswrapper[4907]: I0127 19:22:44.354739 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:22:45 crc kubenswrapper[4907]: I0127 19:22:45.364362 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsfzv" event={"ID":"9db2604e-4039-4a0d-8bf9-f80c51d3df52","Type":"ContainerStarted","Data":"b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56"} Jan 27 19:22:47 crc kubenswrapper[4907]: I0127 19:22:47.386353 4907 generic.go:334] "Generic (PLEG): container finished" podID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" containerID="b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56" exitCode=0 Jan 27 19:22:47 crc kubenswrapper[4907]: I0127 19:22:47.386661 4907 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-xsfzv" event={"ID":"9db2604e-4039-4a0d-8bf9-f80c51d3df52","Type":"ContainerDied","Data":"b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56"} Jan 27 19:22:48 crc kubenswrapper[4907]: I0127 19:22:48.399143 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsfzv" event={"ID":"9db2604e-4039-4a0d-8bf9-f80c51d3df52","Type":"ContainerStarted","Data":"fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9"} Jan 27 19:22:48 crc kubenswrapper[4907]: I0127 19:22:48.428724 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xsfzv" podStartSLOduration=2.943749236 podStartE2EDuration="6.42870355s" podCreationTimestamp="2026-01-27 19:22:42 +0000 UTC" firstStartedPulling="2026-01-27 19:22:44.354530333 +0000 UTC m=+4619.483812945" lastFinishedPulling="2026-01-27 19:22:47.839484657 +0000 UTC m=+4622.968767259" observedRunningTime="2026-01-27 19:22:48.417041471 +0000 UTC m=+4623.546324103" watchObservedRunningTime="2026-01-27 19:22:48.42870355 +0000 UTC m=+4623.557986162" Jan 27 19:22:49 crc kubenswrapper[4907]: I0127 19:22:49.748358 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:22:49 crc kubenswrapper[4907]: E0127 19:22:49.749137 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:22:53 crc kubenswrapper[4907]: I0127 19:22:53.146663 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:53 crc kubenswrapper[4907]: I0127 19:22:53.147183 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:53 crc kubenswrapper[4907]: I0127 19:22:53.210660 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:53 crc kubenswrapper[4907]: I0127 19:22:53.497970 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:53 crc kubenswrapper[4907]: I0127 19:22:53.545683 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xsfzv"] Jan 27 19:22:55 crc kubenswrapper[4907]: I0127 19:22:55.492125 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xsfzv" podUID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" containerName="registry-server" containerID="cri-o://fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9" gracePeriod=2 Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.011757 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.024949 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-utilities\") pod \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.025058 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqchk\" (UniqueName: \"kubernetes.io/projected/9db2604e-4039-4a0d-8bf9-f80c51d3df52-kube-api-access-hqchk\") pod \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.025220 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-catalog-content\") pod \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\" (UID: \"9db2604e-4039-4a0d-8bf9-f80c51d3df52\") " Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.026047 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-utilities" (OuterVolumeSpecName: "utilities") pod "9db2604e-4039-4a0d-8bf9-f80c51d3df52" (UID: "9db2604e-4039-4a0d-8bf9-f80c51d3df52"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.032228 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9db2604e-4039-4a0d-8bf9-f80c51d3df52-kube-api-access-hqchk" (OuterVolumeSpecName: "kube-api-access-hqchk") pod "9db2604e-4039-4a0d-8bf9-f80c51d3df52" (UID: "9db2604e-4039-4a0d-8bf9-f80c51d3df52"). InnerVolumeSpecName "kube-api-access-hqchk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.081265 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9db2604e-4039-4a0d-8bf9-f80c51d3df52" (UID: "9db2604e-4039-4a0d-8bf9-f80c51d3df52"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.127803 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.127831 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9db2604e-4039-4a0d-8bf9-f80c51d3df52-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.127842 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqchk\" (UniqueName: \"kubernetes.io/projected/9db2604e-4039-4a0d-8bf9-f80c51d3df52-kube-api-access-hqchk\") on node \"crc\" DevicePath \"\"" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.510058 4907 generic.go:334] "Generic (PLEG): container finished" podID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" containerID="fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9" exitCode=0 Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.510131 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsfzv" event={"ID":"9db2604e-4039-4a0d-8bf9-f80c51d3df52","Type":"ContainerDied","Data":"fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9"} Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.510433 4907 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-xsfzv" event={"ID":"9db2604e-4039-4a0d-8bf9-f80c51d3df52","Type":"ContainerDied","Data":"f21220de24ed1f78011e22d3e61bccfd658536a3d27cca1de7576fd6efef89ec"} Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.510454 4907 scope.go:117] "RemoveContainer" containerID="fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.510164 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xsfzv" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.544768 4907 scope.go:117] "RemoveContainer" containerID="b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.559564 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xsfzv"] Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.570729 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xsfzv"] Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.577540 4907 scope.go:117] "RemoveContainer" containerID="a498f1cf6690ba9e1f5da0d6f91572beb398ab11f1684ec365cacfd8680e8a40" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.627453 4907 scope.go:117] "RemoveContainer" containerID="fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9" Jan 27 19:22:56 crc kubenswrapper[4907]: E0127 19:22:56.628019 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9\": container with ID starting with fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9 not found: ID does not exist" containerID="fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 
19:22:56.628052 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9"} err="failed to get container status \"fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9\": rpc error: code = NotFound desc = could not find container \"fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9\": container with ID starting with fd815ad441d9442dfb6a20c92a10aaf935bf58e374c6bc3325f0e0b6a746a2d9 not found: ID does not exist" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.628079 4907 scope.go:117] "RemoveContainer" containerID="b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56" Jan 27 19:22:56 crc kubenswrapper[4907]: E0127 19:22:56.628421 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56\": container with ID starting with b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56 not found: ID does not exist" containerID="b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.628448 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56"} err="failed to get container status \"b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56\": rpc error: code = NotFound desc = could not find container \"b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56\": container with ID starting with b48adfa7f23eb05d460e55b4bcff39087e30e8cdde1db8aa4fc98af65ee73e56 not found: ID does not exist" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.628464 4907 scope.go:117] "RemoveContainer" containerID="a498f1cf6690ba9e1f5da0d6f91572beb398ab11f1684ec365cacfd8680e8a40" Jan 27 19:22:56 crc 
kubenswrapper[4907]: E0127 19:22:56.628757 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a498f1cf6690ba9e1f5da0d6f91572beb398ab11f1684ec365cacfd8680e8a40\": container with ID starting with a498f1cf6690ba9e1f5da0d6f91572beb398ab11f1684ec365cacfd8680e8a40 not found: ID does not exist" containerID="a498f1cf6690ba9e1f5da0d6f91572beb398ab11f1684ec365cacfd8680e8a40" Jan 27 19:22:56 crc kubenswrapper[4907]: I0127 19:22:56.628803 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a498f1cf6690ba9e1f5da0d6f91572beb398ab11f1684ec365cacfd8680e8a40"} err="failed to get container status \"a498f1cf6690ba9e1f5da0d6f91572beb398ab11f1684ec365cacfd8680e8a40\": rpc error: code = NotFound desc = could not find container \"a498f1cf6690ba9e1f5da0d6f91572beb398ab11f1684ec365cacfd8680e8a40\": container with ID starting with a498f1cf6690ba9e1f5da0d6f91572beb398ab11f1684ec365cacfd8680e8a40 not found: ID does not exist" Jan 27 19:22:57 crc kubenswrapper[4907]: I0127 19:22:57.761973 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" path="/var/lib/kubelet/pods/9db2604e-4039-4a0d-8bf9-f80c51d3df52/volumes" Jan 27 19:23:01 crc kubenswrapper[4907]: I0127 19:23:01.748744 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:23:01 crc kubenswrapper[4907]: E0127 19:23:01.749588 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:23:16 crc 
kubenswrapper[4907]: I0127 19:23:16.748760 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:23:16 crc kubenswrapper[4907]: E0127 19:23:16.749827 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:23:27 crc kubenswrapper[4907]: I0127 19:23:27.748237 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:23:27 crc kubenswrapper[4907]: E0127 19:23:27.749284 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:23:39 crc kubenswrapper[4907]: I0127 19:23:39.748372 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:23:39 crc kubenswrapper[4907]: E0127 19:23:39.749327 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 
27 19:23:52 crc kubenswrapper[4907]: I0127 19:23:52.748645 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:23:52 crc kubenswrapper[4907]: E0127 19:23:52.749539 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.274477 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cggpc"] Jan 27 19:24:01 crc kubenswrapper[4907]: E0127 19:24:01.275478 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" containerName="extract-utilities" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.275491 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" containerName="extract-utilities" Jan 27 19:24:01 crc kubenswrapper[4907]: E0127 19:24:01.275516 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" containerName="registry-server" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.275522 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" containerName="registry-server" Jan 27 19:24:01 crc kubenswrapper[4907]: E0127 19:24:01.275532 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" containerName="extract-content" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.275539 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" 
containerName="extract-content" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.275805 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9db2604e-4039-4a0d-8bf9-f80c51d3df52" containerName="registry-server" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.277388 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.290117 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cggpc"] Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.369190 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpszm\" (UniqueName: \"kubernetes.io/projected/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-kube-api-access-jpszm\") pod \"redhat-operators-cggpc\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.369258 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-catalog-content\") pod \"redhat-operators-cggpc\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.369632 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-utilities\") pod \"redhat-operators-cggpc\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.472322 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-utilities\") pod \"redhat-operators-cggpc\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.472428 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpszm\" (UniqueName: \"kubernetes.io/projected/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-kube-api-access-jpszm\") pod \"redhat-operators-cggpc\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.472454 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-catalog-content\") pod \"redhat-operators-cggpc\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.472916 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-catalog-content\") pod \"redhat-operators-cggpc\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.473134 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-utilities\") pod \"redhat-operators-cggpc\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.496873 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpszm\" (UniqueName: 
\"kubernetes.io/projected/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-kube-api-access-jpszm\") pod \"redhat-operators-cggpc\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:01 crc kubenswrapper[4907]: I0127 19:24:01.610951 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:02 crc kubenswrapper[4907]: I0127 19:24:02.271178 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cggpc"] Jan 27 19:24:03 crc kubenswrapper[4907]: I0127 19:24:03.307689 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cggpc" event={"ID":"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589","Type":"ContainerStarted","Data":"7a829611836c322ca4f4e5dd10cdaa680f206e6728d3a0c864acdd6c860546a9"} Jan 27 19:24:04 crc kubenswrapper[4907]: I0127 19:24:04.319206 4907 generic.go:334] "Generic (PLEG): container finished" podID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerID="e18ee3a017bdfa4260078d157f0bcde34e145487098d5f73caf32dfaf6fb9242" exitCode=0 Jan 27 19:24:04 crc kubenswrapper[4907]: I0127 19:24:04.319306 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cggpc" event={"ID":"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589","Type":"ContainerDied","Data":"e18ee3a017bdfa4260078d157f0bcde34e145487098d5f73caf32dfaf6fb9242"} Jan 27 19:24:05 crc kubenswrapper[4907]: I0127 19:24:05.332811 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cggpc" event={"ID":"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589","Type":"ContainerStarted","Data":"daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6"} Jan 27 19:24:07 crc kubenswrapper[4907]: I0127 19:24:07.749420 4907 scope.go:117] "RemoveContainer" containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:24:08 crc 
kubenswrapper[4907]: I0127 19:24:08.366277 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"3d879cf7c5d2fcb8a489f4aa5d271325a745968acc2244b4d8143e80b0256eb3"} Jan 27 19:24:13 crc kubenswrapper[4907]: I0127 19:24:13.423356 4907 generic.go:334] "Generic (PLEG): container finished" podID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerID="daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6" exitCode=0 Jan 27 19:24:13 crc kubenswrapper[4907]: I0127 19:24:13.423450 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cggpc" event={"ID":"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589","Type":"ContainerDied","Data":"daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6"} Jan 27 19:24:16 crc kubenswrapper[4907]: I0127 19:24:16.459544 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cggpc" event={"ID":"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589","Type":"ContainerStarted","Data":"24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5"} Jan 27 19:24:16 crc kubenswrapper[4907]: I0127 19:24:16.488079 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cggpc" podStartSLOduration=4.883858886 podStartE2EDuration="15.488058063s" podCreationTimestamp="2026-01-27 19:24:01 +0000 UTC" firstStartedPulling="2026-01-27 19:24:04.321260847 +0000 UTC m=+4699.450543459" lastFinishedPulling="2026-01-27 19:24:14.925460024 +0000 UTC m=+4710.054742636" observedRunningTime="2026-01-27 19:24:16.477643109 +0000 UTC m=+4711.606925731" watchObservedRunningTime="2026-01-27 19:24:16.488058063 +0000 UTC m=+4711.617340675" Jan 27 19:24:21 crc kubenswrapper[4907]: I0127 19:24:21.611621 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:21 crc kubenswrapper[4907]: I0127 19:24:21.612276 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:22 crc kubenswrapper[4907]: I0127 19:24:22.670955 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cggpc" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerName="registry-server" probeResult="failure" output=< Jan 27 19:24:22 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:24:22 crc kubenswrapper[4907]: > Jan 27 19:24:32 crc kubenswrapper[4907]: I0127 19:24:32.665704 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cggpc" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerName="registry-server" probeResult="failure" output=< Jan 27 19:24:32 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:24:32 crc kubenswrapper[4907]: > Jan 27 19:24:42 crc kubenswrapper[4907]: I0127 19:24:42.669383 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cggpc" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerName="registry-server" probeResult="failure" output=< Jan 27 19:24:42 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:24:42 crc kubenswrapper[4907]: > Jan 27 19:24:51 crc kubenswrapper[4907]: I0127 19:24:51.687425 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:51 crc kubenswrapper[4907]: I0127 19:24:51.762267 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:51 crc kubenswrapper[4907]: I0127 19:24:51.937317 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-cggpc"] Jan 27 19:24:52 crc kubenswrapper[4907]: I0127 19:24:52.911208 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cggpc" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerName="registry-server" containerID="cri-o://24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5" gracePeriod=2 Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.585740 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.690798 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-catalog-content\") pod \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.691091 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-utilities\") pod \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.691232 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpszm\" (UniqueName: \"kubernetes.io/projected/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-kube-api-access-jpszm\") pod \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\" (UID: \"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589\") " Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.693008 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-utilities" (OuterVolumeSpecName: "utilities") pod "26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" (UID: 
"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.701570 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-kube-api-access-jpszm" (OuterVolumeSpecName: "kube-api-access-jpszm") pod "26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" (UID: "26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589"). InnerVolumeSpecName "kube-api-access-jpszm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.795873 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpszm\" (UniqueName: \"kubernetes.io/projected/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-kube-api-access-jpszm\") on node \"crc\" DevicePath \"\"" Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.795905 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.817054 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" (UID: "26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.899291 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.924746 4907 generic.go:334] "Generic (PLEG): container finished" podID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerID="24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5" exitCode=0 Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.924804 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cggpc" Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.924823 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cggpc" event={"ID":"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589","Type":"ContainerDied","Data":"24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5"} Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.925150 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cggpc" event={"ID":"26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589","Type":"ContainerDied","Data":"7a829611836c322ca4f4e5dd10cdaa680f206e6728d3a0c864acdd6c860546a9"} Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.925166 4907 scope.go:117] "RemoveContainer" containerID="24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5" Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.968892 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cggpc"] Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 19:24:53.978143 4907 scope.go:117] "RemoveContainer" containerID="daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6" Jan 27 19:24:53 crc kubenswrapper[4907]: I0127 
19:24:53.979335 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cggpc"] Jan 27 19:24:54 crc kubenswrapper[4907]: I0127 19:24:54.019575 4907 scope.go:117] "RemoveContainer" containerID="e18ee3a017bdfa4260078d157f0bcde34e145487098d5f73caf32dfaf6fb9242" Jan 27 19:24:54 crc kubenswrapper[4907]: I0127 19:24:54.059166 4907 scope.go:117] "RemoveContainer" containerID="24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5" Jan 27 19:24:54 crc kubenswrapper[4907]: E0127 19:24:54.059873 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5\": container with ID starting with 24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5 not found: ID does not exist" containerID="24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5" Jan 27 19:24:54 crc kubenswrapper[4907]: I0127 19:24:54.059933 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5"} err="failed to get container status \"24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5\": rpc error: code = NotFound desc = could not find container \"24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5\": container with ID starting with 24e873e3009a29abfccff3de7e3b6d4ab78155cf49dd03a36bcd8f6a8226eba5 not found: ID does not exist" Jan 27 19:24:54 crc kubenswrapper[4907]: I0127 19:24:54.059995 4907 scope.go:117] "RemoveContainer" containerID="daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6" Jan 27 19:24:54 crc kubenswrapper[4907]: E0127 19:24:54.060407 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6\": container with ID 
starting with daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6 not found: ID does not exist" containerID="daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6" Jan 27 19:24:54 crc kubenswrapper[4907]: I0127 19:24:54.060450 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6"} err="failed to get container status \"daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6\": rpc error: code = NotFound desc = could not find container \"daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6\": container with ID starting with daf5f06c64a76fc6e9a93b4699f73bdd19e65e9c6f6b71059737d613b74d5de6 not found: ID does not exist" Jan 27 19:24:54 crc kubenswrapper[4907]: I0127 19:24:54.060479 4907 scope.go:117] "RemoveContainer" containerID="e18ee3a017bdfa4260078d157f0bcde34e145487098d5f73caf32dfaf6fb9242" Jan 27 19:24:54 crc kubenswrapper[4907]: E0127 19:24:54.060880 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e18ee3a017bdfa4260078d157f0bcde34e145487098d5f73caf32dfaf6fb9242\": container with ID starting with e18ee3a017bdfa4260078d157f0bcde34e145487098d5f73caf32dfaf6fb9242 not found: ID does not exist" containerID="e18ee3a017bdfa4260078d157f0bcde34e145487098d5f73caf32dfaf6fb9242" Jan 27 19:24:54 crc kubenswrapper[4907]: I0127 19:24:54.060932 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e18ee3a017bdfa4260078d157f0bcde34e145487098d5f73caf32dfaf6fb9242"} err="failed to get container status \"e18ee3a017bdfa4260078d157f0bcde34e145487098d5f73caf32dfaf6fb9242\": rpc error: code = NotFound desc = could not find container \"e18ee3a017bdfa4260078d157f0bcde34e145487098d5f73caf32dfaf6fb9242\": container with ID starting with e18ee3a017bdfa4260078d157f0bcde34e145487098d5f73caf32dfaf6fb9242 not found: 
ID does not exist" Jan 27 19:24:55 crc kubenswrapper[4907]: I0127 19:24:55.763748 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" path="/var/lib/kubelet/pods/26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589/volumes" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.345645 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 19:25:24 crc kubenswrapper[4907]: E0127 19:25:24.346674 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerName="registry-server" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.346690 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerName="registry-server" Jan 27 19:25:24 crc kubenswrapper[4907]: E0127 19:25:24.346707 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerName="extract-utilities" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.346715 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerName="extract-utilities" Jan 27 19:25:24 crc kubenswrapper[4907]: E0127 19:25:24.346730 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerName="extract-content" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.346739 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerName="extract-content" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.346959 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="26a9f9c1-1d99-4b3a-9dc2-f0cae8d73589" containerName="registry-server" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.348069 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.358433 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.359243 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.359318 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.359316 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-5d7cl" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.389016 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.438127 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cd2t\" (UniqueName: \"kubernetes.io/projected/019838dd-5c5f-40f0-a169-09156549d64c-kube-api-access-2cd2t\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.438198 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.438318 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.438350 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.438551 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.438834 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.438891 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.438953 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-config-data\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.439002 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.541585 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.541676 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.541712 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-config-data\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.542113 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.542199 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cd2t\" (UniqueName: \"kubernetes.io/projected/019838dd-5c5f-40f0-a169-09156549d64c-kube-api-access-2cd2t\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.542222 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.542268 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.542301 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.542381 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.542697 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.542822 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.542932 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-config-data\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.542982 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.543145 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod 
\"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.550144 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.553333 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.561663 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.565966 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cd2t\" (UniqueName: \"kubernetes.io/projected/019838dd-5c5f-40f0-a169-09156549d64c-kube-api-access-2cd2t\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc kubenswrapper[4907]: I0127 19:25:24.592267 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") " pod="openstack/tempest-tests-tempest" Jan 27 19:25:24 crc 
kubenswrapper[4907]: I0127 19:25:24.680114 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 19:25:25 crc kubenswrapper[4907]: I0127 19:25:25.189545 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 19:25:25 crc kubenswrapper[4907]: I0127 19:25:25.277497 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"019838dd-5c5f-40f0-a169-09156549d64c","Type":"ContainerStarted","Data":"e485939d601422021124194f41b2edb21d01ebcfbafc4ed78de76b707da03560"} Jan 27 19:25:30 crc kubenswrapper[4907]: I0127 19:25:30.340751 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mswtp"] Jan 27 19:25:30 crc kubenswrapper[4907]: I0127 19:25:30.344710 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:25:30 crc kubenswrapper[4907]: I0127 19:25:30.378546 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mswtp"] Jan 27 19:25:30 crc kubenswrapper[4907]: I0127 19:25:30.484318 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-utilities\") pod \"redhat-marketplace-mswtp\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:25:30 crc kubenswrapper[4907]: I0127 19:25:30.484375 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-catalog-content\") pod \"redhat-marketplace-mswtp\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:25:30 crc 
kubenswrapper[4907]: I0127 19:25:30.484462 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xm8n\" (UniqueName: \"kubernetes.io/projected/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-kube-api-access-5xm8n\") pod \"redhat-marketplace-mswtp\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:25:30 crc kubenswrapper[4907]: I0127 19:25:30.586846 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-utilities\") pod \"redhat-marketplace-mswtp\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:25:30 crc kubenswrapper[4907]: I0127 19:25:30.586947 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-catalog-content\") pod \"redhat-marketplace-mswtp\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:25:30 crc kubenswrapper[4907]: I0127 19:25:30.587161 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xm8n\" (UniqueName: \"kubernetes.io/projected/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-kube-api-access-5xm8n\") pod \"redhat-marketplace-mswtp\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:25:30 crc kubenswrapper[4907]: I0127 19:25:30.598474 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-utilities\") pod \"redhat-marketplace-mswtp\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:25:30 crc 
kubenswrapper[4907]: I0127 19:25:30.598594 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-catalog-content\") pod \"redhat-marketplace-mswtp\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:25:30 crc kubenswrapper[4907]: I0127 19:25:30.611342 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xm8n\" (UniqueName: \"kubernetes.io/projected/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-kube-api-access-5xm8n\") pod \"redhat-marketplace-mswtp\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:25:30 crc kubenswrapper[4907]: I0127 19:25:30.674188 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:25:36 crc kubenswrapper[4907]: I0127 19:25:36.912145 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mswtp"] Jan 27 19:25:37 crc kubenswrapper[4907]: I0127 19:25:37.449498 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mswtp" event={"ID":"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5","Type":"ContainerStarted","Data":"9506bc4d9330db987c53766de95ed1b7f7f8e0c2c0e83af1c910638f6da62763"} Jan 27 19:25:38 crc kubenswrapper[4907]: I0127 19:25:38.462649 4907 generic.go:334] "Generic (PLEG): container finished" podID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerID="724fc75becd97a8733f10cbbc65e1b699e3133ea3278947d01cef531ff695827" exitCode=0 Jan 27 19:25:38 crc kubenswrapper[4907]: I0127 19:25:38.463021 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mswtp" 
event={"ID":"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5","Type":"ContainerDied","Data":"724fc75becd97a8733f10cbbc65e1b699e3133ea3278947d01cef531ff695827"} Jan 27 19:25:38 crc kubenswrapper[4907]: E0127 19:25:38.659461 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8892b4ff_3ac2_4d8d_ac52_b4853cea55b5.slice/crio-conmon-724fc75becd97a8733f10cbbc65e1b699e3133ea3278947d01cef531ff695827.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8892b4ff_3ac2_4d8d_ac52_b4853cea55b5.slice/crio-724fc75becd97a8733f10cbbc65e1b699e3133ea3278947d01cef531ff695827.scope\": RecentStats: unable to find data in memory cache]" Jan 27 19:25:40 crc kubenswrapper[4907]: I0127 19:25:40.486746 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mswtp" event={"ID":"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5","Type":"ContainerStarted","Data":"13a97c40306874fbe0355ba3ac69117ced9fe9a46d143b9fdf1bd111583618a4"} Jan 27 19:25:42 crc kubenswrapper[4907]: I0127 19:25:42.514393 4907 generic.go:334] "Generic (PLEG): container finished" podID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerID="13a97c40306874fbe0355ba3ac69117ced9fe9a46d143b9fdf1bd111583618a4" exitCode=0 Jan 27 19:25:42 crc kubenswrapper[4907]: I0127 19:25:42.514444 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mswtp" event={"ID":"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5","Type":"ContainerDied","Data":"13a97c40306874fbe0355ba3ac69117ced9fe9a46d143b9fdf1bd111583618a4"} Jan 27 19:25:59 crc kubenswrapper[4907]: I0127 19:25:59.097550 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" podUID="a733096f-e99d-4186-8542-1d8cb16012d2" containerName="manager" probeResult="failure" 
output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:25:59 crc kubenswrapper[4907]: I0127 19:25:59.098277 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" podUID="f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:25:59 crc kubenswrapper[4907]: I0127 19:25:59.097874 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" podUID="ba33cbc9-9a56-4c45-8c07-19b4110e03c3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:25:59 crc kubenswrapper[4907]: I0127 19:25:59.098215 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" podUID="e9f20d2f-16bf-49df-9c41-6fd6faa6ef67" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:25:59 crc kubenswrapper[4907]: I0127 19:25:59.097590 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" podUID="bd2d065d-dd6e-43bc-a725-e7fe52c024b1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:26:13 crc kubenswrapper[4907]: E0127 19:26:13.609340 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 27 19:26:13 crc kubenswrapper[4907]: E0127 19:26:13.614256 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},V
olumeMount{Name:kube-api-access-2cd2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(019838dd-5c5f-40f0-a169-09156549d64c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 19:26:13 crc kubenswrapper[4907]: E0127 19:26:13.615589 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="019838dd-5c5f-40f0-a169-09156549d64c" Jan 27 19:26:14 crc kubenswrapper[4907]: I0127 19:26:14.301230 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mswtp" 
event={"ID":"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5","Type":"ContainerStarted","Data":"0d2a33307c508ac7ec19764558e0c1c55cbf232d5c119fd57dd9bb809242bafa"} Jan 27 19:26:14 crc kubenswrapper[4907]: E0127 19:26:14.303642 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="019838dd-5c5f-40f0-a169-09156549d64c" Jan 27 19:26:14 crc kubenswrapper[4907]: I0127 19:26:14.340983 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mswtp" podStartSLOduration=8.854357664 podStartE2EDuration="44.340966325s" podCreationTimestamp="2026-01-27 19:25:30 +0000 UTC" firstStartedPulling="2026-01-27 19:25:38.465765394 +0000 UTC m=+4793.595048006" lastFinishedPulling="2026-01-27 19:26:13.952374055 +0000 UTC m=+4829.081656667" observedRunningTime="2026-01-27 19:26:14.338350181 +0000 UTC m=+4829.467632833" watchObservedRunningTime="2026-01-27 19:26:14.340966325 +0000 UTC m=+4829.470248937" Jan 27 19:26:20 crc kubenswrapper[4907]: I0127 19:26:20.675039 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:26:20 crc kubenswrapper[4907]: I0127 19:26:20.675688 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:26:22 crc kubenswrapper[4907]: I0127 19:26:22.296502 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-mswtp" podUID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerName="registry-server" probeResult="failure" output=< Jan 27 19:26:22 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:26:22 crc kubenswrapper[4907]: > Jan 
27 19:26:26 crc kubenswrapper[4907]: I0127 19:26:26.521104 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:26:26 crc kubenswrapper[4907]: I0127 19:26:26.521803 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:26:26 crc kubenswrapper[4907]: I0127 19:26:26.640519 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 27 19:26:29 crc kubenswrapper[4907]: I0127 19:26:29.457917 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"019838dd-5c5f-40f0-a169-09156549d64c","Type":"ContainerStarted","Data":"a46b349017119500621dd5d81eceaf280f07e4849a6fbfdb2535471de47390a8"} Jan 27 19:26:29 crc kubenswrapper[4907]: I0127 19:26:29.492636 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.043851142 podStartE2EDuration="1m6.492616238s" podCreationTimestamp="2026-01-27 19:25:23 +0000 UTC" firstStartedPulling="2026-01-27 19:25:25.188234644 +0000 UTC m=+4780.317517256" lastFinishedPulling="2026-01-27 19:26:26.63699974 +0000 UTC m=+4841.766282352" observedRunningTime="2026-01-27 19:26:29.484102428 +0000 UTC m=+4844.613385040" watchObservedRunningTime="2026-01-27 19:26:29.492616238 +0000 UTC m=+4844.621898850" Jan 27 19:26:31 crc kubenswrapper[4907]: I0127 19:26:31.732004 4907 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-marketplace-mswtp" podUID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerName="registry-server" probeResult="failure" output=< Jan 27 19:26:31 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:26:31 crc kubenswrapper[4907]: > Jan 27 19:26:41 crc kubenswrapper[4907]: I0127 19:26:41.314003 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:26:41 crc kubenswrapper[4907]: I0127 19:26:41.375689 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:26:41 crc kubenswrapper[4907]: I0127 19:26:41.556300 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mswtp"] Jan 27 19:26:42 crc kubenswrapper[4907]: I0127 19:26:42.612952 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mswtp" podUID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerName="registry-server" containerID="cri-o://0d2a33307c508ac7ec19764558e0c1c55cbf232d5c119fd57dd9bb809242bafa" gracePeriod=2 Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.625640 4907 generic.go:334] "Generic (PLEG): container finished" podID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerID="0d2a33307c508ac7ec19764558e0c1c55cbf232d5c119fd57dd9bb809242bafa" exitCode=0 Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.625719 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mswtp" event={"ID":"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5","Type":"ContainerDied","Data":"0d2a33307c508ac7ec19764558e0c1c55cbf232d5c119fd57dd9bb809242bafa"} Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.626077 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mswtp" 
event={"ID":"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5","Type":"ContainerDied","Data":"9506bc4d9330db987c53766de95ed1b7f7f8e0c2c0e83af1c910638f6da62763"} Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.626090 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9506bc4d9330db987c53766de95ed1b7f7f8e0c2c0e83af1c910638f6da62763" Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.670363 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.870292 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xm8n\" (UniqueName: \"kubernetes.io/projected/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-kube-api-access-5xm8n\") pod \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.870361 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-catalog-content\") pod \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.870611 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-utilities\") pod \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\" (UID: \"8892b4ff-3ac2-4d8d-ac52-b4853cea55b5\") " Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.872045 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-utilities" (OuterVolumeSpecName: "utilities") pod "8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" (UID: "8892b4ff-3ac2-4d8d-ac52-b4853cea55b5"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.883623 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-kube-api-access-5xm8n" (OuterVolumeSpecName: "kube-api-access-5xm8n") pod "8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" (UID: "8892b4ff-3ac2-4d8d-ac52-b4853cea55b5"). InnerVolumeSpecName "kube-api-access-5xm8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.896944 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" (UID: "8892b4ff-3ac2-4d8d-ac52-b4853cea55b5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.973751 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xm8n\" (UniqueName: \"kubernetes.io/projected/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-kube-api-access-5xm8n\") on node \"crc\" DevicePath \"\"" Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.974184 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:26:43 crc kubenswrapper[4907]: I0127 19:26:43.974199 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:26:44 crc kubenswrapper[4907]: I0127 19:26:44.636059 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mswtp" Jan 27 19:26:44 crc kubenswrapper[4907]: I0127 19:26:44.676781 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mswtp"] Jan 27 19:26:44 crc kubenswrapper[4907]: I0127 19:26:44.687080 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mswtp"] Jan 27 19:26:45 crc kubenswrapper[4907]: I0127 19:26:45.761372 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" path="/var/lib/kubelet/pods/8892b4ff-3ac2-4d8d-ac52-b4853cea55b5/volumes" Jan 27 19:26:56 crc kubenswrapper[4907]: I0127 19:26:56.520979 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:26:56 crc kubenswrapper[4907]: I0127 19:26:56.521765 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:27:26 crc kubenswrapper[4907]: I0127 19:27:26.520960 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:27:26 crc kubenswrapper[4907]: I0127 19:27:26.521505 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" 
podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:27:26 crc kubenswrapper[4907]: I0127 19:27:26.521589 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 19:27:26 crc kubenswrapper[4907]: I0127 19:27:26.522637 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d879cf7c5d2fcb8a489f4aa5d271325a745968acc2244b4d8143e80b0256eb3"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:27:26 crc kubenswrapper[4907]: I0127 19:27:26.522684 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://3d879cf7c5d2fcb8a489f4aa5d271325a745968acc2244b4d8143e80b0256eb3" gracePeriod=600 Jan 27 19:27:27 crc kubenswrapper[4907]: I0127 19:27:27.161251 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="3d879cf7c5d2fcb8a489f4aa5d271325a745968acc2244b4d8143e80b0256eb3" exitCode=0 Jan 27 19:27:27 crc kubenswrapper[4907]: I0127 19:27:27.161349 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"3d879cf7c5d2fcb8a489f4aa5d271325a745968acc2244b4d8143e80b0256eb3"} Jan 27 19:27:27 crc kubenswrapper[4907]: I0127 19:27:27.161653 4907 scope.go:117] "RemoveContainer" 
containerID="6222c3bc9010664707041ca0cf77720bb8a9830dc3ace007bd46fdc2c8dccc41" Jan 27 19:27:28 crc kubenswrapper[4907]: I0127 19:27:28.173009 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b"} Jan 27 19:29:56 crc kubenswrapper[4907]: I0127 19:29:56.525041 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:29:56 crc kubenswrapper[4907]: I0127 19:29:56.527052 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.733540 4907 patch_prober.go:28] interesting pod/oauth-openshift-788784fd4b-j7f9b container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.733596 4907 patch_prober.go:28] interesting pod/oauth-openshift-788784fd4b-j7f9b container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:29:57 crc 
kubenswrapper[4907]: I0127 19:29:57.733966 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" podUID="b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.734044 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" podUID="b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.914799 4907 patch_prober.go:28] interesting pod/console-operator-58897d9998-bjfcf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.914901 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" podUID="1c678cbb-a03d-4ed8-85bd-befc2884454e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.955721 4907 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7x4fp container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.955788 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.955857 4907 patch_prober.go:28] interesting pod/console-operator-58897d9998-bjfcf container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.955914 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" podUID="1c678cbb-a03d-4ed8-85bd-befc2884454e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.955990 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.956011 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:29:57 crc kubenswrapper[4907]: 
I0127 19:29:57.956164 4907 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7x4fp container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.956200 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.965472 4907 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-65v8r container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:29:57 crc kubenswrapper[4907]: I0127 19:29:57.965550 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" podUID="99183c02-34c0-4a91-9e6e-0efd5d2a7a42" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:29:58 crc kubenswrapper[4907]: I0127 19:29:58.206885 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" podUID="774ac09a-4164-4e22-9ea2-385ac4ef87eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:29:58 crc kubenswrapper[4907]: I0127 19:29:58.479077 4907 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l" podUID="a8e8fa01-e75c-41bc-bfbb-affea0fcc0a2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:29:58 crc kubenswrapper[4907]: I0127 19:29:58.678200 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:29:58 crc kubenswrapper[4907]: I0127 19:29:58.678744 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:29:58 crc kubenswrapper[4907]: I0127 19:29:58.678237 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:29:58 crc kubenswrapper[4907]: I0127 19:29:58.678910 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" Jan 27 19:30:02 crc kubenswrapper[4907]: I0127 19:30:02.753330 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="e57d2b03-9116-4a79-bfc2-5b802cf62910" containerName="galera" probeResult="failure" output="command timed out" Jan 27 19:30:02 crc kubenswrapper[4907]: I0127 19:30:02.753333 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e57d2b03-9116-4a79-bfc2-5b802cf62910" containerName="galera" probeResult="failure" output="command timed out" Jan 27 19:30:02 crc kubenswrapper[4907]: I0127 19:30:02.987253 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-csdnr" podUID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.236849 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7"] Jan 27 19:30:03 crc kubenswrapper[4907]: E0127 19:30:03.241471 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerName="extract-utilities" Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.241504 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerName="extract-utilities" Jan 27 19:30:03 crc kubenswrapper[4907]: E0127 19:30:03.241989 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerName="extract-content" Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.242000 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerName="extract-content" Jan 27 19:30:03 crc kubenswrapper[4907]: E0127 19:30:03.242029 4907 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerName="registry-server" Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.242035 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerName="registry-server" Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.242327 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="8892b4ff-3ac2-4d8d-ac52-b4853cea55b5" containerName="registry-server" Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.277664 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.301588 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.301587 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.344038 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a233285-9953-450d-a8a9-b7dc65737a09-config-volume\") pod \"collect-profiles-29492370-9xtk7\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.345460 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ph58\" (UniqueName: \"kubernetes.io/projected/4a233285-9953-450d-a8a9-b7dc65737a09-kube-api-access-6ph58\") pod \"collect-profiles-29492370-9xtk7\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.345582 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a233285-9953-450d-a8a9-b7dc65737a09-secret-volume\") pod \"collect-profiles-29492370-9xtk7\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.448209 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a233285-9953-450d-a8a9-b7dc65737a09-config-volume\") pod \"collect-profiles-29492370-9xtk7\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.448282 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ph58\" (UniqueName: \"kubernetes.io/projected/4a233285-9953-450d-a8a9-b7dc65737a09-kube-api-access-6ph58\") pod \"collect-profiles-29492370-9xtk7\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.448312 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a233285-9953-450d-a8a9-b7dc65737a09-secret-volume\") pod \"collect-profiles-29492370-9xtk7\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.463315 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/4a233285-9953-450d-a8a9-b7dc65737a09-config-volume\") pod \"collect-profiles-29492370-9xtk7\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.527716 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a233285-9953-450d-a8a9-b7dc65737a09-secret-volume\") pod \"collect-profiles-29492370-9xtk7\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.540100 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ph58\" (UniqueName: \"kubernetes.io/projected/4a233285-9953-450d-a8a9-b7dc65737a09-kube-api-access-6ph58\") pod \"collect-profiles-29492370-9xtk7\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.645392 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.750600 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerName="galera" probeResult="failure" output="command timed out" Jan 27 19:30:03 crc kubenswrapper[4907]: I0127 19:30:03.751611 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerName="galera" probeResult="failure" output="command timed out" Jan 27 19:30:04 crc kubenswrapper[4907]: I0127 19:30:04.159768 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" podUID="8a6e2a40-e233-4dbe-9b63-0fecf3fc1487" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:04 crc kubenswrapper[4907]: I0127 19:30:04.159805 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" podUID="8a6e2a40-e233-4dbe-9b63-0fecf3fc1487" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:04 crc kubenswrapper[4907]: I0127 19:30:04.324220 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" podUID="53565dd2-5a29-4ba0-9654-36b9600f765b" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.13:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:05 crc kubenswrapper[4907]: I0127 19:30:05.443179 4907 
patch_prober.go:28] interesting pod/metrics-server-7f448b7857-l4vhw container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.80:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:05 crc kubenswrapper[4907]: I0127 19:30:05.443179 4907 patch_prober.go:28] interesting pod/metrics-server-7f448b7857-l4vhw container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:05 crc kubenswrapper[4907]: I0127 19:30:05.445296 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" podUID="562a795f-c556-42b2-a9a3-0baf8b3ce4c5" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.80:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:05 crc kubenswrapper[4907]: I0127 19:30:05.445219 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" podUID="562a795f-c556-42b2-a9a3-0baf8b3ce4c5" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.80:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:05 crc kubenswrapper[4907]: I0127 19:30:05.938424 4907 patch_prober.go:28] interesting pod/monitoring-plugin-6596df577b-flw67 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.81:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:05 crc kubenswrapper[4907]: I0127 19:30:05.938503 4907 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" podUID="c3e1c70a-dd32-4bc6-b7ec-6ec039441440" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.81:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:06 crc kubenswrapper[4907]: I0127 19:30:06.399778 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7"] Jan 27 19:30:06 crc kubenswrapper[4907]: I0127 19:30:06.830192 4907 patch_prober.go:28] interesting pod/console-7b674f54c6-zhrj9 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:06 crc kubenswrapper[4907]: I0127 19:30:06.830256 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-7b674f54c6-zhrj9" podUID="a2362241-225f-40e2-9be3-67766a65316b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:07 crc kubenswrapper[4907]: I0127 19:30:07.465125 4907 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-qb9qr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:07 crc kubenswrapper[4907]: I0127 19:30:07.465447 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" podUID="c2d359e7-9de4-4357-ae4c-8da07c1a880c" 
containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:07 crc kubenswrapper[4907]: I0127 19:30:07.674826 4907 patch_prober.go:28] interesting pod/oauth-openshift-788784fd4b-j7f9b container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:07 crc kubenswrapper[4907]: I0127 19:30:07.674898 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" podUID="b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:07 crc kubenswrapper[4907]: I0127 19:30:07.674903 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:07 crc kubenswrapper[4907]: I0127 19:30:07.674958 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:07 crc kubenswrapper[4907]: I0127 19:30:07.674985 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:07 crc kubenswrapper[4907]: I0127 19:30:07.675015 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:07 crc kubenswrapper[4907]: I0127 19:30:07.674929 4907 patch_prober.go:28] interesting pod/oauth-openshift-788784fd4b-j7f9b container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:07 crc kubenswrapper[4907]: I0127 19:30:07.675079 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" podUID="b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:07 crc kubenswrapper[4907]: I0127 19:30:07.841887 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" podUID="018e0dfe-5282-40d5-87db-8551645d6e02" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.006827 4907 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7x4fp container/operator 
namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.006875 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.007164 4907 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-5q5h2 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.63:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.007183 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" podUID="c53f2859-15de-4c57-81ba-539c7787b649" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.63:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.006786 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" podUID="018e0dfe-5282-40d5-87db-8551645d6e02" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.176873 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router 
namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.176967 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.176801 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" podUID="277579e8-58c3-4ad7-b902-e62f045ba8c6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.258800 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" podUID="277579e8-58c3-4ad7-b902-e62f045ba8c6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.258933 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" podUID="f1ed42c6-98ac-41b8-96df-24919c0f9837" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.314327 4907 patch_prober.go:28] interesting pod/thanos-querier-c9f8b8df8-2gbm9 
container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.314404 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" podUID="8e0f501d-4ce7-4268-b84c-71e7a8a1b430" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.78:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.341744 4907 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-65v8r container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.90:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.341799 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" podUID="99183c02-34c0-4a91-9e6e-0efd5d2a7a42" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.90:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.341864 4907 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7x4fp container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.341885 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" 
podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.424628 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" podUID="c4a64f11-d6ef-487e-afa3-1d9bdbea9424" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.424931 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" podUID="e9f20d2f-16bf-49df-9c41-6fd6faa6ef67" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.425114 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" podUID="e9f20d2f-16bf-49df-9c41-6fd6faa6ef67" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.425154 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.425172 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-h72cm" 
podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.597009 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" podUID="f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.673720 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" podUID="774ac09a-4164-4e22-9ea2-385ac4ef87eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.673720 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" podUID="f1ed42c6-98ac-41b8-96df-24919c0f9837" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.674076 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" podUID="bd2d065d-dd6e-43bc-a725-e7fe52c024b1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.674235 4907 patch_prober.go:28] interesting 
pod/perses-operator-5bf474d74f-65v8r container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.674277 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" podUID="99183c02-34c0-4a91-9e6e-0efd5d2a7a42" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.753022 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="adac6b31-6901-4af8-bc21-648d56318021" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.753029 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="adac6b31-6901-4af8-bc21-648d56318021" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.755719 4907 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lfqhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.755730 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" podUID="a733096f-e99d-4186-8542-1d8cb16012d2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.755758 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" podUID="d667690f-b387-424c-b130-e50277eaa0c4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.755815 4907 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lfqhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.755865 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" podUID="d667690f-b387-424c-b130-e50277eaa0c4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.756113 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" podUID="c4a64f11-d6ef-487e-afa3-1d9bdbea9424" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.756808 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" 
podUID="bc6ebe7e-320a-4193-8db4-3d4574ba1c3b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:08 crc kubenswrapper[4907]: I0127 19:30:08.839741 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" podUID="bc6ebe7e-320a-4193-8db4-3d4574ba1c3b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.006813 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" podUID="7f5a8eee-f06b-4376-90d6-ff3faef0e8af" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.006817 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" podUID="f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.171767 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" podUID="774ac09a-4164-4e22-9ea2-385ac4ef87eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.171872 4907 patch_prober.go:28] interesting 
pod/package-server-manager-789f6589d5-tb79g container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.171896 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" podUID="486be3bf-a27f-4a44-97f3-751b782bee1f" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.171939 4907 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-85nxl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.171956 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" podUID="434d6d34-127a-4de6-8f5c-6ea67008f70a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.171994 4907 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-85nxl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.172011 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" podUID="434d6d34-127a-4de6-8f5c-6ea67008f70a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.172334 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.172368 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.172378 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.172435 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" 
probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.172439 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" podUID="a733096f-e99d-4186-8542-1d8cb16012d2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.172753 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" podUID="f84f4e53-c1de-49a3-8435-5e4999a034fd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.172952 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" podUID="12b8e76f-853f-4eeb-b6c5-e77d05bec357" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.173005 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" podUID="12b8e76f-853f-4eeb-b6c5-e77d05bec357" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.173032 4907 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" podUID="7f5a8eee-f06b-4376-90d6-ff3faef0e8af" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.173132 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" podUID="ba33cbc9-9a56-4c45-8c07-19b4110e03c3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.173200 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" podUID="ba33cbc9-9a56-4c45-8c07-19b4110e03c3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.173246 4907 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-tb79g container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.173275 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" podUID="486be3bf-a27f-4a44-97f3-751b782bee1f" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 
19:30:09.172744 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" podUID="f84f4e53-c1de-49a3-8435-5e4999a034fd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.394648 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-njxl9 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.49:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.394990 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" podUID="faf9da31-9bbb-43b4-9cc1-a80f95392ccf" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.49:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.394742 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-njxl9 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.49:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.395063 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" podUID="faf9da31-9bbb-43b4-9cc1-a80f95392ccf" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.49:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.460093 
4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-mwm5k container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.460158 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" podUID="d57b015c-f3fc-424d-b910-96e63c6da31a" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.460207 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-mwm5k container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.460239 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" podUID="d57b015c-f3fc-424d-b910-96e63c6da31a" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:09 crc kubenswrapper[4907]: I0127 19:30:09.748229 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-4xx4s" podUID="ee97e15a-ebc3-4c61-9841-9c1fb43fdee7" containerName="ovnkube-controller" probeResult="failure" output="command timed out" Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.020831 4907 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: 
Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.021239 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.112309 4907 patch_prober.go:28] interesting pod/route-controller-manager-8c88b6f67-gq6zl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.112786 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" podUID="4b0a63e6-0f9c-42b7-8006-fbd93909482e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.112385 4907 patch_prober.go:28] interesting pod/route-controller-manager-8c88b6f67-gq6zl container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:10 crc 
kubenswrapper[4907]: I0127 19:30:10.112956 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" podUID="4b0a63e6-0f9c-42b7-8006-fbd93909482e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.503143 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" podUID="7707f450-bf8d-4e84-9baa-a02bc80a0b22" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.544393 4907 patch_prober.go:28] interesting pod/loki-operator-controller-manager-7b8dfd4994-zw4xr container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.30:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.544457 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" podUID="6347c63b-e1fb-4570-a350-68a9f9f1b79b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.30:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.663531 4907 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2fplf container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.73:8443/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.663623 4907 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2fplf container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.694655 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" podUID="dccc085e-3aae-4c8e-8737-699c60063730" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:10 crc kubenswrapper[4907]: I0127 19:30:10.694655 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" podUID="dccc085e-3aae-4c8e-8737-699c60063730" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:11 crc kubenswrapper[4907]: I0127 19:30:11.454879 4907 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:11 crc kubenswrapper[4907]: I0127 19:30:11.455197 4907 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:11 crc kubenswrapper[4907]: I0127 19:30:11.754194 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-vrcdt" podUID="8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 19:30:11 crc kubenswrapper[4907]: I0127 19:30:11.754367 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-wz7rn" podUID="1ec7dee3-a9ee-4bb8-b444-899c120854a7" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 19:30:11 crc kubenswrapper[4907]: I0127 19:30:11.754444 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-wz7rn" podUID="1ec7dee3-a9ee-4bb8-b444-899c120854a7" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 19:30:11 crc kubenswrapper[4907]: I0127 19:30:11.754498 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-vrcdt" podUID="8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 19:30:11 crc kubenswrapper[4907]: I0127 19:30:11.793774 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" podUID="9a776a10-0883-468e-a8d3-087ca6429b1b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:12 crc kubenswrapper[4907]: I0127 
19:30:12.224817 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk" podUID="202ff14a-3733-4ccf-8202-94fac75bdfc4" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:12 crc kubenswrapper[4907]: I0127 19:30:12.224879 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk" podUID="202ff14a-3733-4ccf-8202-94fac75bdfc4" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:12 crc kubenswrapper[4907]: I0127 19:30:12.748340 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e57d2b03-9116-4a79-bfc2-5b802cf62910" containerName="galera" probeResult="failure" output="command timed out" Jan 27 19:30:12 crc kubenswrapper[4907]: I0127 19:30:12.752070 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="e57d2b03-9116-4a79-bfc2-5b802cf62910" containerName="galera" probeResult="failure" output="command timed out" Jan 27 19:30:12 crc kubenswrapper[4907]: I0127 19:30:12.755161 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="8cc0b779-ca13-49be-91c1-ea2eb4a99d9c" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 27 19:30:12 crc kubenswrapper[4907]: I0127 19:30:12.945654 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-csdnr" podUID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc 
kubenswrapper[4907]: I0127 19:30:13.081621 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-dv4j2" podUID="fdf800ed-f5e8-4478-9e7a-98c7c95c7c52" containerName="registry-server" probeResult="failure" output=< Jan 27 19:30:13 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:30:13 crc kubenswrapper[4907]: > Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.081659 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-dhv2c" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="registry-server" probeResult="failure" output=< Jan 27 19:30:13 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:30:13 crc kubenswrapper[4907]: > Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.081745 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-dv4j2" podUID="fdf800ed-f5e8-4478-9e7a-98c7c95c7c52" containerName="registry-server" probeResult="failure" output=< Jan 27 19:30:13 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:30:13 crc kubenswrapper[4907]: > Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.081905 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-dhv2c" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="registry-server" probeResult="failure" output=< Jan 27 19:30:13 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:30:13 crc kubenswrapper[4907]: > Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.110798 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" podUID="dd967d05-2ecd-4578-9c41-22e36ff088c1" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.110909 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-csdnr" podUID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.110921 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-csdnr" podUID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.193883 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-zfszb" podUID="2ea123ce-4328-4379-8310-dbfff15acfbf" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.193964 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" podUID="dd967d05-2ecd-4578-9c41-22e36ff088c1" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.194589 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-zfszb" podUID="2ea123ce-4328-4379-8310-dbfff15acfbf" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.313388 4907 patch_prober.go:28] interesting pod/thanos-querier-c9f8b8df8-2gbm9 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.313467 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" podUID="8e0f501d-4ce7-4268-b84c-71e7a8a1b430" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.78:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.313575 4907 patch_prober.go:28] interesting pod/thanos-querier-c9f8b8df8-2gbm9 container/kube-rbac-proxy-web namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.78:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.313653 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" podUID="8e0f501d-4ce7-4268-b84c-71e7a8a1b430" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.78:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.353644 4907 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-zhq64 container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.46:3101/ready\": net/http: request canceled while 
waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.353715 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" podUID="bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.46:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.693765 4907 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-4ngf2 container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.48:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.693812 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" podUID="70874c1f-da0d-4389-8021-fd3003150fff" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.48:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.748679 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerName="galera" probeResult="failure" output="command timed out" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.748679 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerName="galera" probeResult="failure" output="command timed out" Jan 27 
19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.749347 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-wz5df" podUID="0b5adf10-ea9c-48b5-bece-3ee8683423e3" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.750519 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="adac6b31-6901-4af8-bc21-648d56318021" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.751853 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="adac6b31-6901-4af8-bc21-648d56318021" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.756802 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" podUID="7c6ac148-bc7a-4480-9155-8f78567a5070" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.825030 4907 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-r2fdr container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.47:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.825133 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" podUID="8f62d8a1-62d1-4206-b061-f75c44ff2450" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.47:3101/ready\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.879385 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="edbdf1e9-d0d7-458d-8f5a-891ee37d7483" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.11:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:13 crc kubenswrapper[4907]: I0127 19:30:13.907676 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="edbdf1e9-d0d7-458d-8f5a-891ee37d7483" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.11:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.116708 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" podUID="8a6e2a40-e233-4dbe-9b63-0fecf3fc1487" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.321782 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" podUID="53565dd2-5a29-4ba0-9654-36b9600f765b" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.13:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.394933 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-njxl9 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get 
\"https://10.217.0.49:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.394999 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" podUID="faf9da31-9bbb-43b4-9cc1-a80f95392ccf" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.49:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.460302 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-mwm5k container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.460373 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" podUID="d57b015c-f3fc-424d-b910-96e63c6da31a" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.606218 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-597cv" podUID="aa958bdc-32c5-4e9f-841e-7427fdb87b31" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.606986 4907 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get 
\"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.607054 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="30b4b16e-4eff-46be-aac5-63d2b3d8fdf2" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.646764 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-597cv" podUID="aa958bdc-32c5-4e9f-841e-7427fdb87b31" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.706073 4907 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.706131 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="2448dad5-d0f7-4335-a3fb-a23c5ef59bbf" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.808280 4907 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= 
Jan 27 19:30:14 crc kubenswrapper[4907]: I0127 19:30:14.808344 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="a9dc6389-0ad3-4259-aaf2-945493e66aa2" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:15 crc kubenswrapper[4907]: I0127 19:30:15.385776 4907 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-87z2b container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:15 crc kubenswrapper[4907]: I0127 19:30:15.385859 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" podUID="5564598e-ff23-4f9e-b3de-64e127e94da6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:15 crc kubenswrapper[4907]: I0127 19:30:15.385787 4907 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-87z2b container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:15 crc kubenswrapper[4907]: I0127 19:30:15.385930 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" podUID="5564598e-ff23-4f9e-b3de-64e127e94da6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:15 
crc kubenswrapper[4907]: I0127 19:30:15.385795 4907 patch_prober.go:28] interesting pod/metrics-server-7f448b7857-l4vhw container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.80:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:15 crc kubenswrapper[4907]: I0127 19:30:15.385970 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" podUID="562a795f-c556-42b2-a9a3-0baf8b3ce4c5" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.80:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:15 crc kubenswrapper[4907]: I0127 19:30:15.766255 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-l59wn" podUID="5f465d65-342c-410f-9374-d8c5ac6f03e0" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 19:30:15 crc kubenswrapper[4907]: I0127 19:30:15.938885 4907 patch_prober.go:28] interesting pod/monitoring-plugin-6596df577b-flw67 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.81:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:15 crc kubenswrapper[4907]: I0127 19:30:15.938960 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" podUID="c3e1c70a-dd32-4bc6-b7ec-6ec039441440" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.81:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:16 crc 
kubenswrapper[4907]: I0127 19:30:16.139676 4907 patch_prober.go:28] interesting pod/controller-manager-9f964d47c-l4mx8 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:16 crc kubenswrapper[4907]: I0127 19:30:16.139743 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" podUID="48e5b57d-d01a-441e-beac-ef5e5d74dbc1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:16 crc kubenswrapper[4907]: I0127 19:30:16.139673 4907 patch_prober.go:28] interesting pod/controller-manager-9f964d47c-l4mx8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:16 crc kubenswrapper[4907]: I0127 19:30:16.139842 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" podUID="48e5b57d-d01a-441e-beac-ef5e5d74dbc1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:16 crc kubenswrapper[4907]: I0127 19:30:16.793040 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-xc2fp" podUID="0a849662-db42-42f0-9317-eb3714b775d0" containerName="registry-server" probeResult="failure" output="command timed out" 
Jan 27 19:30:16 crc kubenswrapper[4907]: I0127 19:30:16.793716 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-xc2fp" podUID="0a849662-db42-42f0-9317-eb3714b775d0" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 19:30:16 crc kubenswrapper[4907]: I0127 19:30:16.887705 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" podUID="f22de95d-f437-432c-917a-a08c082e02c4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.100:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:16 crc kubenswrapper[4907]: I0127 19:30:16.887720 4907 patch_prober.go:28] interesting pod/console-7b674f54c6-zhrj9 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:16 crc kubenswrapper[4907]: I0127 19:30:16.887797 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-7b674f54c6-zhrj9" podUID="a2362241-225f-40e2-9be3-67766a65316b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:16 crc kubenswrapper[4907]: I0127 19:30:16.887802 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" podUID="f22de95d-f437-432c-917a-a08c082e02c4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.464782 4907 
patch_prober.go:28] interesting pod/authentication-operator-69f744f599-qb9qr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.465199 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" podUID="c2d359e7-9de4-4357-ae4c-8da07c1a880c" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.716760 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.716837 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.716892 4907 patch_prober.go:28] interesting pod/oauth-openshift-788784fd4b-j7f9b container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:17 crc kubenswrapper[4907]: 
I0127 19:30:17.716961 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" podUID="b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.717081 4907 patch_prober.go:28] interesting pod/downloads-7954f5f757-h79fx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.717096 4907 patch_prober.go:28] interesting pod/oauth-openshift-788784fd4b-j7f9b container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.717131 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h79fx" podUID="c8a31b60-14c7-4b73-a17f-60d101c0119b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.717173 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" podUID="b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:17 crc 
kubenswrapper[4907]: I0127 19:30:17.720535 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.720591 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.724286 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"584633d93075f3cab246f00b53f57b6d6dbc4bb552695d874bc24adb82e896e9"} pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" containerMessage="Container oauth-openshift failed liveness probe, will be restarted" Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.751837 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="8cc0b779-ca13-49be-91c1-ea2eb4a99d9c" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.757768 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-8jsvt" podUID="e6378a4c-96e5-4151-a0ca-c320fa9b667d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.963826 4907 patch_prober.go:28] interesting pod/console-operator-58897d9998-bjfcf container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.963894 4907 patch_prober.go:28] interesting 
pod/nmstate-webhook-8474b5b9d8-5q5h2 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.63:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.963868 4907 patch_prober.go:28] interesting pod/console-operator-58897d9998-bjfcf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.963939 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" podUID="1c678cbb-a03d-4ed8-85bd-befc2884454e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.963979 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5q5h2" podUID="c53f2859-15de-4c57-81ba-539c7787b649" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.63:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:17 crc kubenswrapper[4907]: I0127 19:30:17.963904 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" podUID="018e0dfe-5282-40d5-87db-8551645d6e02" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:17 crc kubenswrapper[4907]: 
I0127 19:30:17.964058 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" podUID="1c678cbb-a03d-4ed8-85bd-befc2884454e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.004771 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" podUID="277579e8-58c3-4ad7-b902-e62f045ba8c6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.087922 4907 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7x4fp container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.088014 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.088098 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.130772 4907 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7x4fp container/operator namespace/openshift-operators: Liveness probe 
status=failure output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.130836 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.130892 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.171843 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" podUID="a05cfe48-4bf5-4199-aefa-de59259798c4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.212838 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.212903 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" podUID="e9f20d2f-16bf-49df-9c41-6fd6faa6ef67" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 
19:30:18.212964 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.213286 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.213214 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.213427 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.253890 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" podUID="f1ed42c6-98ac-41b8-96df-24919c0f9837" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.253947 4907 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-65v8r container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.254021 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" podUID="99183c02-34c0-4a91-9e6e-0efd5d2a7a42" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.254133 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.335861 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" podUID="c4a64f11-d6ef-487e-afa3-1d9bdbea9424" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.376923 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" podUID="bd2d065d-dd6e-43bc-a725-e7fe52c024b1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.377004 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjhgn" podUID="e257f81e-9460-4391-a7a5-cca3fc9230d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.377329 4907 patch_prober.go:28] 
interesting pod/thanos-querier-c9f8b8df8-2gbm9 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.377341 4907 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lfqhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.377406 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" podUID="d667690f-b387-424c-b130-e50277eaa0c4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.377352 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" podUID="8e0f501d-4ce7-4268-b84c-71e7a8a1b430" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.78:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.377501 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" podUID="f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.377493 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" podUID="774ac09a-4164-4e22-9ea2-385ac4ef87eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.377539 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" podUID="bc6ebe7e-320a-4193-8db4-3d4574ba1c3b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.377637 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.377638 4907 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lfqhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.377699 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfqhn" podUID="d667690f-b387-424c-b130-e50277eaa0c4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.428223 4907 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="operator" containerStatusID={"Type":"cri-o","ID":"32651cc0d9f45bfb8a0657d8774cf718bdad12aa946b4f6a6c0e98678d496679"} pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" containerMessage="Container operator failed liveness probe, will be restarted" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.428594 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" containerID="cri-o://32651cc0d9f45bfb8a0657d8774cf718bdad12aa946b4f6a6c0e98678d496679" gracePeriod=30 Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.509845 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bf27l" podUID="a8e8fa01-e75c-41bc-bfbb-affea0fcc0a2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.673759 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mpgzf" podUID="f84f4e53-c1de-49a3-8435-5e4999a034fd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.673825 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" podUID="7f5a8eee-f06b-4376-90d6-ff3faef0e8af" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: 
I0127 19:30:18.673939 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt" podUID="24caa967-ac26-4666-bf41-e2c4bc6ebb0f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.678855 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.678923 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.678946 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.679023 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc 
kubenswrapper[4907]: I0127 19:30:18.678977 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.679137 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.691464 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"39c2b04a084c1f73feb7fedf35c2685fc18b62e0104e4a5612d7a513b08ecfe8"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" containerMessage="Container packageserver failed liveness probe, will be restarted" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.691528 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" containerID="cri-o://39c2b04a084c1f73feb7fedf35c2685fc18b62e0104e4a5612d7a513b08ecfe8" gracePeriod=30 Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.748870 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-wz5df" podUID="0b5adf10-ea9c-48b5-bece-3ee8683423e3" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.799904 4907 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-tb79g container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.799957 4907 patch_prober.go:28] interesting 
pod/catalog-operator-68c6474976-85nxl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.799975 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" podUID="486be3bf-a27f-4a44-97f3-751b782bee1f" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.800043 4907 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-85nxl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.800047 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-7567458d64-vvlhm" podUID="12b8e76f-853f-4eeb-b6c5-e77d05bec357" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.800071 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" podUID="434d6d34-127a-4de6-8f5c-6ea67008f70a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.800032 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85nxl" podUID="434d6d34-127a-4de6-8f5c-6ea67008f70a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.800187 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-564965969-wvnrt" podUID="ba33cbc9-9a56-4c45-8c07-19b4110e03c3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.800237 4907 patch_prober.go:28] interesting pod/oauth-openshift-788784fd4b-j7f9b container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.800257 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" podUID="b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.800296 4907 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-tb79g container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure 
output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:18 crc kubenswrapper[4907]: I0127 19:30:18.800326 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tb79g" podUID="486be3bf-a27f-4a44-97f3-751b782bee1f" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.130751 4907 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7x4fp container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.130807 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.297761 4907 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-65v8r container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.297823 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" podUID="99183c02-34c0-4a91-9e6e-0efd5d2a7a42" containerName="perses-operator" probeResult="failure" output="Get 
\"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.297869 4907 patch_prober.go:28] interesting pod/router-default-5444994796-h72cm container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.297884 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-h72cm" podUID="d427ba67-a9ef-41ef-a2f3-fbe9eb87a69e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.394526 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-njxl9 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.49:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.394656 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" podUID="faf9da31-9bbb-43b4-9cc1-a80f95392ccf" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.49:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.418677 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-njxl9 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.49:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.418732 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" podUID="faf9da31-9bbb-43b4-9cc1-a80f95392ccf" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.49:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.418783 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" podUID="774ac09a-4164-4e22-9ea2-385ac4ef87eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.469173 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-mwm5k container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.469230 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-mwm5k container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.469617 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" podUID="d57b015c-f3fc-424d-b910-96e63c6da31a" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.489063 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" podUID="d57b015c-f3fc-424d-b910-96e63c6da31a" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.681187 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.681260 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.750084 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="adac6b31-6901-4af8-bc21-648d56318021" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.754035 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="adac6b31-6901-4af8-bc21-648d56318021" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 19:30:19 crc kubenswrapper[4907]: I0127 19:30:19.782115 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 19:30:20 crc 
kubenswrapper[4907]: I0127 19:30:20.017119 4907 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.017206 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.112306 4907 patch_prober.go:28] interesting pod/route-controller-manager-8c88b6f67-gq6zl container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.112343 4907 patch_prober.go:28] interesting pod/route-controller-manager-8c88b6f67-gq6zl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.112365 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" podUID="4b0a63e6-0f9c-42b7-8006-fbd93909482e" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.112400 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-8c88b6f67-gq6zl" podUID="4b0a63e6-0f9c-42b7-8006-fbd93909482e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.172037 4907 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7x4fp container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.172108 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.543958 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" podUID="7707f450-bf8d-4e84-9baa-a02bc80a0b22" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.626734 4907 patch_prober.go:28] interesting pod/loki-operator-controller-manager-7b8dfd4994-zw4xr container/manager 
namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.30:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.626748 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-6f954ddc5b-fjchc" podUID="7707f450-bf8d-4e84-9baa-a02bc80a0b22" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.627066 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" podUID="6347c63b-e1fb-4570-a350-68a9f9f1b79b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.30:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.626786 4907 patch_prober.go:28] interesting pod/loki-operator-controller-manager-7b8dfd4994-zw4xr container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.30:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.627130 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" podUID="6347c63b-e1fb-4570-a350-68a9f9f1b79b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.30:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.662878 4907 patch_prober.go:28] interesting 
pod/prometheus-operator-admission-webhook-f54c54754-2fplf container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.662952 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" podUID="dccc085e-3aae-4c8e-8737-699c60063730" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.663604 4907 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2fplf container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.663682 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2fplf" podUID="dccc085e-3aae-4c8e-8737-699c60063730" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.737120 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="c9228204-5d32-47ea-9236-8ae3e4d5eebc" containerName="prometheus" probeResult="failure" 
output="Get \"https://10.217.0.169:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.737350 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="c9228204-5d32-47ea-9236-8ae3e4d5eebc" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.169:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.779608 4907 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:20 crc kubenswrapper[4907]: I0127 19:30:20.779664 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:21 crc kubenswrapper[4907]: I0127 19:30:21.456477 4907 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:21 crc kubenswrapper[4907]: I0127 19:30:21.456654 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:21 crc kubenswrapper[4907]: I0127 19:30:21.670596 4907 patch_prober.go:28] interesting pod/apiserver-76f77b778f-8ljpb container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:21 crc kubenswrapper[4907]: I0127 19:30:21.670683 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-8ljpb" podUID="db7629bc-e5a1-44e1-9af4-ecc83acfda75" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:21 crc kubenswrapper[4907]: I0127 19:30:21.753479 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-wz7rn" podUID="1ec7dee3-a9ee-4bb8-b444-899c120854a7" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 19:30:21 crc kubenswrapper[4907]: I0127 19:30:21.753515 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-vrcdt" podUID="8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 19:30:21 crc kubenswrapper[4907]: I0127 19:30:21.753479 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-vrcdt" podUID="8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 
19:30:21 crc kubenswrapper[4907]: I0127 19:30:21.753603 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-wz7rn" podUID="1ec7dee3-a9ee-4bb8-b444-899c120854a7" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 19:30:21 crc kubenswrapper[4907]: I0127 19:30:21.790756 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" podUID="9a776a10-0883-468e-a8d3-087ca6429b1b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.133726 4907 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.134015 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.186747 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk" podUID="202ff14a-3733-4ccf-8202-94fac75bdfc4" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.227845 4907 
prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-548b7f8fd-7zpsk" podUID="202ff14a-3733-4ccf-8202-94fac75bdfc4" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.270275 4907 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-xld9m container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.270760 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xld9m" podUID="9f254819-bf2c-4c38-881f-8d12a0d56278" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.471902 4907 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.471974 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.750168 4907 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e57d2b03-9116-4a79-bfc2-5b802cf62910" containerName="galera" probeResult="failure" output="command timed out" Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.751239 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="e57d2b03-9116-4a79-bfc2-5b802cf62910" containerName="galera" probeResult="failure" output="command timed out" Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.753406 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.753445 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.772752 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"1ace59d89fc8097fca650f5dd330c7a4a02797cb0386774384bb0ef81ec64e5d"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Jan 27 19:30:22 crc kubenswrapper[4907]: I0127 19:30:22.984801 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-csdnr" podUID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.148750 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-csdnr" podUID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.148750 
4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" podUID="dd967d05-2ecd-4578-9c41-22e36ff088c1" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.231860 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-zfszb" podUID="2ea123ce-4328-4379-8310-dbfff15acfbf" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.231942 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-csdnr" podUID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.232046 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-csdnr" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.232328 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n9qqt" podUID="dd967d05-2ecd-4578-9c41-22e36ff088c1" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.232647 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-zfszb" podUID="2ea123ce-4328-4379-8310-dbfff15acfbf" containerName="controller" probeResult="failure" output="Get 
\"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.233737 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"1262f141b48da7795e7b6536b0148eb0b29160c91ad229eb5208ab0c76214872"} pod="metallb-system/frr-k8s-csdnr" containerMessage="Container frr failed liveness probe, will be restarted" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.233838 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-csdnr" podUID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerName="frr" containerID="cri-o://1262f141b48da7795e7b6536b0148eb0b29160c91ad229eb5208ab0c76214872" gracePeriod=2 Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.317333 4907 patch_prober.go:28] interesting pod/thanos-querier-c9f8b8df8-2gbm9 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.317390 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-c9f8b8df8-2gbm9" podUID="8e0f501d-4ce7-4268-b84c-71e7a8a1b430" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.78:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.352456 4907 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-zhq64 container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.46:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= 
Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.352509 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-zhq64" podUID="bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.46:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.693334 4907 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-4ngf2 container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.48:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.693427 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-4ngf2" podUID="70874c1f-da0d-4389-8021-fd3003150fff" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.48:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.714678 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" podUID="7c6ac148-bc7a-4480-9155-8f78567a5070" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.747954 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerName="galera" 
probeResult="failure" output="command timed out" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.748008 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e57d2b03-9116-4a79-bfc2-5b802cf62910" containerName="galera" probeResult="failure" output="command timed out" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.748263 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-wz5df" podUID="0b5adf10-ea9c-48b5-bece-3ee8683423e3" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.748970 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerName="galera" probeResult="failure" output="command timed out" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.750838 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="adac6b31-6901-4af8-bc21-648d56318021" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.752472 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-dhv2c" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.752520 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="adac6b31-6901-4af8-bc21-648d56318021" containerName="prometheus" probeResult="failure" output="command timed out" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.754525 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-dhv2c" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" 
containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.755831 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mrpqf" podUID="7c6ac148-bc7a-4480-9155-8f78567a5070" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.756509 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="8cc0b779-ca13-49be-91c1-ea2eb4a99d9c" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.794405 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.794463 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.794474 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.794481 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.796212 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"b4e2bae231d1e2ccce2f31b0049e3caad088021caaaace02895e084bde83eeb5"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.797436 4907 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/ceilometer-0" podUID="8cc0b779-ca13-49be-91c1-ea2eb4a99d9c" containerName="ceilometer-central-agent" containerID="cri-o://b4e2bae231d1e2ccce2f31b0049e3caad088021caaaace02895e084bde83eeb5" gracePeriod=30 Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.798191 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"641b5e7c557227e0f34d068ecbb86ed3c19d649b1a3820d27d4203ab008cf941"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.820852 4907 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-r2fdr container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.47:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.820913 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-r2fdr" podUID="8f62d8a1-62d1-4206-b061-f75c44ff2450" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.47:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.878971 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="edbdf1e9-d0d7-458d-8f5a-891ee37d7483" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.11:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:23 crc kubenswrapper[4907]: I0127 19:30:23.880040 4907 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/kube-state-metrics-0" podUID="edbdf1e9-d0d7-458d-8f5a-891ee37d7483" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.11:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.158734 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" podUID="8a6e2a40-e233-4dbe-9b63-0fecf3fc1487" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.159052 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.158746 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" podUID="8a6e2a40-e233-4dbe-9b63-0fecf3fc1487" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.310613 4907 trace.go:236] Trace[2137106824]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-1" (27-Jan-2026 19:30:18.339) (total time: 5963ms): Jan 27 19:30:24 crc kubenswrapper[4907]: Trace[2137106824]: [5.963312054s] [5.963312054s] END Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.310618 4907 trace.go:236] Trace[379575846]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (27-Jan-2026 19:30:14.794) (total time: 9508ms): Jan 27 19:30:24 crc kubenswrapper[4907]: 
Trace[379575846]: [9.508132295s] [9.508132295s] END Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.343776 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-jfhbt" podUID="53565dd2-5a29-4ba0-9654-36b9600f765b" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.13:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.395247 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-njxl9 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.49:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.395307 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-njxl9 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.49:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.395675 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" podUID="faf9da31-9bbb-43b4-9cc1-a80f95392ccf" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.49:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.395589 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" podUID="faf9da31-9bbb-43b4-9cc1-a80f95392ccf" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.49:8083/ready\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.459593 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-mwm5k container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.459638 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-mwm5k container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.459718 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" podUID="d57b015c-f3fc-424d-b910-96e63c6da31a" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.50:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.459650 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" podUID="d57b015c-f3fc-424d-b910-96e63c6da31a" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.50:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.556860 4907 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-fqkck container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.72:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.557258 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" podUID="21da9305-e6ab-4378-b316-7a3ffc47faa0" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.72:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.557094 4907 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-fqkck container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.72:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.557344 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-fqkck" podUID="21da9305-e6ab-4378-b316-7a3ffc47faa0" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.72:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.623905 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerDied","Data":"1262f141b48da7795e7b6536b0148eb0b29160c91ad229eb5208ab0c76214872"} Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.623797 4907 generic.go:334] "Generic (PLEG): container finished" podID="3a1b45eb-7bdd-4172-99f0-b74eabce028d" containerID="1262f141b48da7795e7b6536b0148eb0b29160c91ad229eb5208ab0c76214872" exitCode=143 Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.624717 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-597cv" podUID="aa958bdc-32c5-4e9f-841e-7427fdb87b31" containerName="speaker" probeResult="failure" 
output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.624846 4907 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.624878 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="30b4b16e-4eff-46be-aac5-63d2b3d8fdf2" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.624951 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-597cv" podUID="aa958bdc-32c5-4e9f-841e-7427fdb87b31" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.627391 4907 generic.go:334] "Generic (PLEG): container finished" podID="a733096f-e99d-4186-8542-1d8cb16012d2" containerID="d9b11c82957494396cb5619801b7c27c5a306b8775088bf3a26c5585d8a7e6bd" exitCode=1 Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.627428 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" event={"ID":"a733096f-e99d-4186-8542-1d8cb16012d2","Type":"ContainerDied","Data":"d9b11c82957494396cb5619801b7c27c5a306b8775088bf3a26c5585d8a7e6bd"} Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.634549 4907 scope.go:117] "RemoveContainer" 
containerID="d9b11c82957494396cb5619801b7c27c5a306b8775088bf3a26c5585d8a7e6bd" Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.704870 4907 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.704930 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="2448dad5-d0f7-4335-a3fb-a23c5ef59bbf" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.748914 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerName="galera" probeResult="failure" output="command timed out" Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.752995 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-dv4j2" podUID="fdf800ed-f5e8-4478-9e7a-98c7c95c7c52" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.753121 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-dv4j2" podUID="fdf800ed-f5e8-4478-9e7a-98c7c95c7c52" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.808038 4907 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get 
\"https://10.217.0.53:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:24 crc kubenswrapper[4907]: I0127 19:30:24.808158 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="a9dc6389-0ad3-4259-aaf2-945493e66aa2" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.201838 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" podUID="8a6e2a40-e233-4dbe-9b63-0fecf3fc1487" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.344800 4907 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-87z2b container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.345159 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" podUID="5564598e-ff23-4f9e-b3de-64e127e94da6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.385902 4907 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-87z2b container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get 
\"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.385966 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-87z2b" podUID="5564598e-ff23-4f9e-b3de-64e127e94da6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.386521 4907 patch_prober.go:28] interesting pod/metrics-server-7f448b7857-l4vhw container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.386611 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" podUID="562a795f-c556-42b2-a9a3-0baf8b3ce4c5" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.80:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.386689 4907 patch_prober.go:28] interesting pod/metrics-server-7f448b7857-l4vhw container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.80:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.386778 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" podUID="562a795f-c556-42b2-a9a3-0baf8b3ce4c5" containerName="metrics-server" probeResult="failure" output="Get 
\"https://10.217.0.80:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.386987 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.393866 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="metrics-server" containerStatusID={"Type":"cri-o","ID":"c13b56b4336fe54ce350cf735e6495e7b316df8aecab8e8659bd933cbe92b3a7"} pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" containerMessage="Container metrics-server failed liveness probe, will be restarted" Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.393965 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" podUID="562a795f-c556-42b2-a9a3-0baf8b3ce4c5" containerName="metrics-server" containerID="cri-o://c13b56b4336fe54ce350cf735e6495e7b316df8aecab8e8659bd933cbe92b3a7" gracePeriod=170 Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.538752 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-wz5df" Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.639709 4907 generic.go:334] "Generic (PLEG): container finished" podID="7f5a8eee-f06b-4376-90d6-ff3faef0e8af" containerID="67602a1f42cb5fae5c0acf680123da146665fa2e7f522560e1b12b95218a72a6" exitCode=1 Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.639776 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" event={"ID":"7f5a8eee-f06b-4376-90d6-ff3faef0e8af","Type":"ContainerDied","Data":"67602a1f42cb5fae5c0acf680123da146665fa2e7f522560e1b12b95218a72a6"} Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.641326 4907 generic.go:334] 
"Generic (PLEG): container finished" podID="c4a64f11-d6ef-487e-afa3-1d9bdbea9424" containerID="9efd027ef1c377220fa8f340dbee3ae67ce228fe71bb1d54f0e67e85fdad2175" exitCode=1 Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.641354 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" event={"ID":"c4a64f11-d6ef-487e-afa3-1d9bdbea9424","Type":"ContainerDied","Data":"9efd027ef1c377220fa8f340dbee3ae67ce228fe71bb1d54f0e67e85fdad2175"} Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.643435 4907 generic.go:334] "Generic (PLEG): container finished" podID="6347c63b-e1fb-4570-a350-68a9f9f1b79b" containerID="623c303552551027985f664f3b1be20727aa9bf35473c5e129c5ce18b1e755d0" exitCode=1 Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.643486 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" event={"ID":"6347c63b-e1fb-4570-a350-68a9f9f1b79b","Type":"ContainerDied","Data":"623c303552551027985f664f3b1be20727aa9bf35473c5e129c5ce18b1e755d0"} Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.645202 4907 scope.go:117] "RemoveContainer" containerID="9efd027ef1c377220fa8f340dbee3ae67ce228fe71bb1d54f0e67e85fdad2175" Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.645683 4907 generic.go:334] "Generic (PLEG): container finished" podID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerID="39c2b04a084c1f73feb7fedf35c2685fc18b62e0104e4a5612d7a513b08ecfe8" exitCode=0 Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.645711 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" event={"ID":"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d","Type":"ContainerDied","Data":"39c2b04a084c1f73feb7fedf35c2685fc18b62e0104e4a5612d7a513b08ecfe8"} Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.646809 4907 scope.go:117] "RemoveContainer" 
containerID="67602a1f42cb5fae5c0acf680123da146665fa2e7f522560e1b12b95218a72a6" Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.658202 4907 scope.go:117] "RemoveContainer" containerID="623c303552551027985f664f3b1be20727aa9bf35473c5e129c5ce18b1e755d0" Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.738099 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="c9228204-5d32-47ea-9236-8ae3e4d5eebc" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.169:9090/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.738395 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="c9228204-5d32-47ea-9236-8ae3e4d5eebc" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.169:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.830177 4907 patch_prober.go:28] interesting pod/console-7b674f54c6-zhrj9 container/console namespace/openshift-console: Liveness probe status=failure output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.830234 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/console-7b674f54c6-zhrj9" podUID="a2362241-225f-40e2-9be3-67766a65316b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.830295 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.846395 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console" containerStatusID={"Type":"cri-o","ID":"20a55c416798d5f6571433d370c7db5061a997e42c61cb7e7765b2828ccc1615"} pod="openshift-console/console-7b674f54c6-zhrj9" containerMessage="Container console failed liveness probe, will be restarted" Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.936704 4907 patch_prober.go:28] interesting pod/monitoring-plugin-6596df577b-flw67 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.81:9443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.937086 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" podUID="c3e1c70a-dd32-4bc6-b7ec-6ec039441440" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.81:9443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:25 crc kubenswrapper[4907]: I0127 19:30:25.937275 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.140037 4907 patch_prober.go:28] interesting pod/controller-manager-9f964d47c-l4mx8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.140117 4907 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" podUID="48e5b57d-d01a-441e-beac-ef5e5d74dbc1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.140133 4907 patch_prober.go:28] interesting pod/controller-manager-9f964d47c-l4mx8 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.140192 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-9f964d47c-l4mx8" podUID="48e5b57d-d01a-441e-beac-ef5e5d74dbc1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.164016 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-6596df577b-flw67" Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.521538 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.521847 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.661243 4907 generic.go:334] "Generic (PLEG): container finished" podID="8a6e2a40-e233-4dbe-9b63-0fecf3fc1487" containerID="d6f1ffc460e9a1b68dde86df48126cc9b5326fd8bc608e058ebd692fa28b61f5" exitCode=1 Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.661343 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" event={"ID":"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487","Type":"ContainerDied","Data":"d6f1ffc460e9a1b68dde86df48126cc9b5326fd8bc608e058ebd692fa28b61f5"} Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.662242 4907 scope.go:117] "RemoveContainer" containerID="d6f1ffc460e9a1b68dde86df48126cc9b5326fd8bc608e058ebd692fa28b61f5" Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.752277 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-xc2fp" podUID="0a849662-db42-42f0-9317-eb3714b775d0" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.762959 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-xc2fp" podUID="0a849662-db42-42f0-9317-eb3714b775d0" containerName="registry-server" probeResult="failure" output="command timed out" Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.774468 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" podUID="018e0dfe-5282-40d5-87db-8551645d6e02" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": read tcp 10.217.0.2:46454->10.217.0.102:8081: read: connection reset by peer" Jan 27 19:30:26 crc 
kubenswrapper[4907]: I0127 19:30:26.774511 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" podUID="018e0dfe-5282-40d5-87db-8551645d6e02" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": read tcp 10.217.0.2:46452->10.217.0.102:8081: read: connection reset by peer" Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.774598 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.775446 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" podUID="018e0dfe-5282-40d5-87db-8551645d6e02" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": dial tcp 10.217.0.102:8081: connect: connection refused" Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.791354 4907 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7x4fp container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.89:8081/healthz\": dial tcp 10.217.0.89:8081: connect: connection refused" start-of-body= Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.791401 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": dial tcp 10.217.0.89:8081: connect: connection refused" Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.841932 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" podUID="f22de95d-f437-432c-917a-a08c082e02c4" containerName="operator" 
probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.842011 4907 patch_prober.go:28] interesting pod/console-7b674f54c6-zhrj9 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.842030 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-7b674f54c6-zhrj9" podUID="a2362241-225f-40e2-9be3-67766a65316b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.842084 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.993326 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" Jan 27 19:30:26 crc kubenswrapper[4907]: I0127 19:30:26.993650 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.205716 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" podUID="774ac09a-4164-4e22-9ea2-385ac4ef87eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": dial tcp 10.217.0.112:8081: connect: connection refused" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 
19:30:27.206039 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" podUID="774ac09a-4164-4e22-9ea2-385ac4ef87eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": dial tcp 10.217.0.112:8081: connect: connection refused" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.212066 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-65v8r" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.238482 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" podUID="bd2d065d-dd6e-43bc-a725-e7fe52c024b1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": dial tcp 10.217.0.113:8081: connect: connection refused" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.238619 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.238480 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" podUID="bd2d065d-dd6e-43bc-a725-e7fe52c024b1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": dial tcp 10.217.0.113:8081: connect: connection refused" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.239028 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" podUID="bd2d065d-dd6e-43bc-a725-e7fe52c024b1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": dial tcp 10.217.0.113:8081: connect: connection refused" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.320502 
4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.320549 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.402078 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.465459 4907 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-qb9qr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.465852 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" podUID="c2d359e7-9de4-4357-ae4c-8da07c1a880c" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.465905 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.466906 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"cc01864bb4f8a1120f92173489f6efaf64dc66769dbd5d75c406ce52e4f84c57"} pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" 
containerMessage="Container authentication-operator failed liveness probe, will be restarted" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.466934 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" podUID="c2d359e7-9de4-4357-ae4c-8da07c1a880c" containerName="authentication-operator" containerID="cri-o://cc01864bb4f8a1120f92173489f6efaf64dc66769dbd5d75c406ce52e4f84c57" gracePeriod=30 Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.527727 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.527769 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.673728 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" event={"ID":"c4a64f11-d6ef-487e-afa3-1d9bdbea9424","Type":"ContainerStarted","Data":"2eeea9891ba496331cd3fd22e2dd09e9b59b08d5c9850923975af6681162f64d"} Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.675373 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.682838 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-csdnr" event={"ID":"3a1b45eb-7bdd-4172-99f0-b74eabce028d","Type":"ContainerStarted","Data":"7e72c78899397eb28b7f44e7716e8cfc6c0725ea73c548b23b002d5b14eecb74"} Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.683110 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-h72cm" Jan 27 19:30:27 
crc kubenswrapper[4907]: I0127 19:30:27.682928 4907 patch_prober.go:28] interesting pod/oauth-openshift-788784fd4b-j7f9b container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.683608 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" podUID="b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.684001 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.684052 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.686470 4907 generic.go:334] "Generic (PLEG): container finished" podID="774ac09a-4164-4e22-9ea2-385ac4ef87eb" containerID="cd67c7484dd024f40584304f1743c918c2fca9cb0465132c65128cd9cb711873" exitCode=1 Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.686541 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" 
event={"ID":"774ac09a-4164-4e22-9ea2-385ac4ef87eb","Type":"ContainerDied","Data":"cd67c7484dd024f40584304f1743c918c2fca9cb0465132c65128cd9cb711873"} Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.687821 4907 scope.go:117] "RemoveContainer" containerID="cd67c7484dd024f40584304f1743c918c2fca9cb0465132c65128cd9cb711873" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.690807 4907 generic.go:334] "Generic (PLEG): container finished" podID="bd2d065d-dd6e-43bc-a725-e7fe52c024b1" containerID="0ea3a0756688d726f061762e83cd00694fde87d8c1c2a0d6356745db391935da" exitCode=1 Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.690890 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" event={"ID":"bd2d065d-dd6e-43bc-a725-e7fe52c024b1","Type":"ContainerDied","Data":"0ea3a0756688d726f061762e83cd00694fde87d8c1c2a0d6356745db391935da"} Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.691671 4907 scope.go:117] "RemoveContainer" containerID="0ea3a0756688d726f061762e83cd00694fde87d8c1c2a0d6356745db391935da" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.712397 4907 generic.go:334] "Generic (PLEG): container finished" podID="812bcca3-8896-4492-86ff-1df596f0e604" containerID="32651cc0d9f45bfb8a0657d8774cf718bdad12aa946b4f6a6c0e98678d496679" exitCode=0 Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.712482 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" event={"ID":"812bcca3-8896-4492-86ff-1df596f0e604","Type":"ContainerDied","Data":"32651cc0d9f45bfb8a0657d8774cf718bdad12aa946b4f6a6c0e98678d496679"} Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.716526 4907 generic.go:334] "Generic (PLEG): container finished" podID="9a776a10-0883-468e-a8d3-087ca6429b1b" containerID="e5df74a29f441c00381140ee9c5bf88402dcab24c0e3e0599caea608cfb497d9" exitCode=1 Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 
19:30:27.716648 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" event={"ID":"9a776a10-0883-468e-a8d3-087ca6429b1b","Type":"ContainerDied","Data":"e5df74a29f441c00381140ee9c5bf88402dcab24c0e3e0599caea608cfb497d9"} Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.718586 4907 scope.go:117] "RemoveContainer" containerID="e5df74a29f441c00381140ee9c5bf88402dcab24c0e3e0599caea608cfb497d9" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.718975 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" event={"ID":"a733096f-e99d-4186-8542-1d8cb16012d2","Type":"ContainerStarted","Data":"66c71f667fae0c6a02c528ba290895d59ae9ee3b3ece0ee42b91b220c283c810"} Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.719352 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.723209 4907 generic.go:334] "Generic (PLEG): container finished" podID="018e0dfe-5282-40d5-87db-8551645d6e02" containerID="b5aa252e15e301a390a646e1dc30e8c068a761a272a7ac092776578f3920eba9" exitCode=1 Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.723324 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" event={"ID":"018e0dfe-5282-40d5-87db-8551645d6e02","Type":"ContainerDied","Data":"b5aa252e15e301a390a646e1dc30e8c068a761a272a7ac092776578f3920eba9"} Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.724868 4907 scope.go:117] "RemoveContainer" containerID="b5aa252e15e301a390a646e1dc30e8c068a761a272a7ac092776578f3920eba9" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.729218 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" event={"ID":"7ca8f687-0e6e-4df7-8dc1-0bb597588b6d","Type":"ContainerStarted","Data":"34abe2ba07118423357146528ed4139f9cc106258253d30ccd6322e1de78d314"} Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.729976 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.730006 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.730066 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.736885 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" event={"ID":"7f5a8eee-f06b-4376-90d6-ff3faef0e8af","Type":"ContainerStarted","Data":"5f2a4d065111347b84e91863ec562e48004c46fef945ae10a6499abec2ff956f"} Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.737277 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.749727 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="adac6b31-6901-4af8-bc21-648d56318021" containerName="prometheus" probeResult="failure" 
output="command timed out" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.749747 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="8cc0b779-ca13-49be-91c1-ea2eb4a99d9c" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.815760 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" podUID="277579e8-58c3-4ad7-b902-e62f045ba8c6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:27 crc kubenswrapper[4907]: I0127 19:30:27.816107 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.024813 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" podUID="e9f20d2f-16bf-49df-9c41-6fd6faa6ef67" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.024844 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" podUID="e9f20d2f-16bf-49df-9c41-6fd6faa6ef67" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.024927 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" Jan 27 19:30:28 crc 
kubenswrapper[4907]: I0127 19:30:28.025042 4907 patch_prober.go:28] interesting pod/console-operator-58897d9998-bjfcf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.025065 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" podUID="1c678cbb-a03d-4ed8-85bd-befc2884454e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.025091 4907 patch_prober.go:28] interesting pod/console-operator-58897d9998-bjfcf container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.025104 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-bjfcf" podUID="1c678cbb-a03d-4ed8-85bd-befc2884454e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.025161 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" podUID="277579e8-58c3-4ad7-b902-e62f045ba8c6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:28 crc 
kubenswrapper[4907]: I0127 19:30:28.361118 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.587464 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt" podUID="24caa967-ac26-4666-bf41-e2c4bc6ebb0f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.587746 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-fljbt" podUID="24caa967-ac26-4666-bf41-e2c4bc6ebb0f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.660373 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-4nlx7" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.751294 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" event={"ID":"8a6e2a40-e233-4dbe-9b63-0fecf3fc1487","Type":"ContainerStarted","Data":"80e4bcdfd7d70a4a810ced5f2b5cfce7d51602948b748f79cce44bc3fb1f2d60"} Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.751506 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.753381 4907 generic.go:334] "Generic (PLEG): container finished" podID="f1ed42c6-98ac-41b8-96df-24919c0f9837" 
containerID="3605e3de4992657560adcedd6736025307c02ec192c2480d862bfcd2d5259408" exitCode=1 Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.753446 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" event={"ID":"f1ed42c6-98ac-41b8-96df-24919c0f9837","Type":"ContainerDied","Data":"3605e3de4992657560adcedd6736025307c02ec192c2480d862bfcd2d5259408"} Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.755699 4907 generic.go:334] "Generic (PLEG): container finished" podID="bc6ebe7e-320a-4193-8db4-3d4574ba1c3b" containerID="fdefe0078798864fd86efd52e2d0b196ae938ad85159ea735c3bfc8ec988c404" exitCode=1 Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.755741 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" event={"ID":"bc6ebe7e-320a-4193-8db4-3d4574ba1c3b","Type":"ContainerDied","Data":"fdefe0078798864fd86efd52e2d0b196ae938ad85159ea735c3bfc8ec988c404"} Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.757013 4907 scope.go:117] "RemoveContainer" containerID="fdefe0078798864fd86efd52e2d0b196ae938ad85159ea735c3bfc8ec988c404" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.766176 4907 generic.go:334] "Generic (PLEG): container finished" podID="a05cfe48-4bf5-4199-aefa-de59259798c4" containerID="5689924b2146070aa42522ec58218e2f214b9c2865a1996704d145530362175e" exitCode=1 Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.766251 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" event={"ID":"a05cfe48-4bf5-4199-aefa-de59259798c4","Type":"ContainerDied","Data":"5689924b2146070aa42522ec58218e2f214b9c2865a1996704d145530362175e"} Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.768514 4907 scope.go:117] "RemoveContainer" containerID="3605e3de4992657560adcedd6736025307c02ec192c2480d862bfcd2d5259408" Jan 
27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.779299 4907 scope.go:117] "RemoveContainer" containerID="5689924b2146070aa42522ec58218e2f214b9c2865a1996704d145530362175e" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.802331 4907 generic.go:334] "Generic (PLEG): container finished" podID="c2d359e7-9de4-4357-ae4c-8da07c1a880c" containerID="cc01864bb4f8a1120f92173489f6efaf64dc66769dbd5d75c406ce52e4f84c57" exitCode=0 Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.802468 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" event={"ID":"c2d359e7-9de4-4357-ae4c-8da07c1a880c","Type":"ContainerDied","Data":"cc01864bb4f8a1120f92173489f6efaf64dc66769dbd5d75c406ce52e4f84c57"} Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.805900 4907 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nrdnf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.805938 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" podUID="7ca8f687-0e6e-4df7-8dc1-0bb597588b6d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.831745 4907 trace.go:236] Trace[1246640287]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-0" (27-Jan-2026 19:30:27.773) (total time: 1058ms): Jan 27 19:30:28 crc kubenswrapper[4907]: Trace[1246640287]: [1.058200993s] [1.058200993s] END Jan 27 19:30:28 crc kubenswrapper[4907]: I0127 19:30:28.860798 4907 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" podUID="277579e8-58c3-4ad7-b902-e62f045ba8c6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.012922 4907 trace.go:236] Trace[1645195763]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-ingester-0" (27-Jan-2026 19:30:27.689) (total time: 1323ms): Jan 27 19:30:29 crc kubenswrapper[4907]: Trace[1645195763]: [1.323317005s] [1.323317005s] END Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.394654 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-njxl9 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.49:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.394946 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-njxl9" podUID="faf9da31-9bbb-43b4-9cc1-a80f95392ccf" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.49:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.465829 4907 patch_prober.go:28] interesting pod/logging-loki-gateway-795ff9d55b-mwm5k container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.465890 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-795ff9d55b-mwm5k" podUID="d57b015c-f3fc-424d-b910-96e63c6da31a" containerName="opa" 
probeResult="failure" output="Get \"https://10.217.0.50:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.479676 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.862696 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" event={"ID":"bd2d065d-dd6e-43bc-a725-e7fe52c024b1","Type":"ContainerStarted","Data":"f9381f1e02136e207f2ee8f3be5aebc0285af746b7e7d1deece6f0da3a8538ed"} Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.863280 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.913030 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" event={"ID":"f1ed42c6-98ac-41b8-96df-24919c0f9837","Type":"ContainerStarted","Data":"b0ca80dfbf17362ccb5cc75ed398cde1df7189cb54e38ad9b78cd000c58a42bd"} Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.914378 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.931498 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" event={"ID":"9a776a10-0883-468e-a8d3-087ca6429b1b","Type":"ContainerStarted","Data":"b76d2fa4132b926c053991ec9229a853e2f66ad2189e4f897765e85dbec0b63d"} Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.932358 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.968725 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" event={"ID":"018e0dfe-5282-40d5-87db-8551645d6e02","Type":"ContainerStarted","Data":"1a0c53bd8db41eb6a071ec999505e36a82474a27d4fa122750df878996505807"} Jan 27 19:30:29 crc kubenswrapper[4907]: I0127 19:30:29.974348 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.000118 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" event={"ID":"774ac09a-4164-4e22-9ea2-385ac4ef87eb","Type":"ContainerStarted","Data":"1e82eea0a3b9f0d7ce12ae9f179109387de1e78ee9f4db9a68e8e73c7bff2227"} Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.001194 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.035858 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" event={"ID":"6347c63b-e1fb-4570-a350-68a9f9f1b79b","Type":"ContainerStarted","Data":"f632838810d641669fb0b49dfc60ada952cc16b653c17272fb75b415fce7ce8c"} Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.035889 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.040840 4907 generic.go:334] "Generic (PLEG): container finished" podID="f22de95d-f437-432c-917a-a08c082e02c4" 
containerID="629e463a589c9cd19a0c4f9024b2b0a5c378af295a1f0de861335384cb35ab06" exitCode=1 Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.041762 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" event={"ID":"f22de95d-f437-432c-917a-a08c082e02c4","Type":"ContainerDied","Data":"629e463a589c9cd19a0c4f9024b2b0a5c378af295a1f0de861335384cb35ab06"} Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.051362 4907 scope.go:117] "RemoveContainer" containerID="629e463a589c9cd19a0c4f9024b2b0a5c378af295a1f0de861335384cb35ab06" Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.054873 4907 generic.go:334] "Generic (PLEG): container finished" podID="f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b" containerID="eedd421bc5c7c3d1953d09a14f5c71ef59a435eb019a1187fa9fd5e00be2a59e" exitCode=1 Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.054940 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" event={"ID":"f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b","Type":"ContainerDied","Data":"eedd421bc5c7c3d1953d09a14f5c71ef59a435eb019a1187fa9fd5e00be2a59e"} Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.061236 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" event={"ID":"bc6ebe7e-320a-4193-8db4-3d4574ba1c3b","Type":"ContainerStarted","Data":"448f7f7cbdec6aecab43fdbad5699810dd48adba4c8b205cbd11a867abb8d56e"} Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.062400 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" Jan 27 19:30:30 crc kubenswrapper[4907]: I0127 19:30:30.069277 4907 scope.go:117] "RemoveContainer" containerID="eedd421bc5c7c3d1953d09a14f5c71ef59a435eb019a1187fa9fd5e00be2a59e" Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 
19:30:31.097069 4907 generic.go:334] "Generic (PLEG): container finished" podID="8cc0b779-ca13-49be-91c1-ea2eb4a99d9c" containerID="b4e2bae231d1e2ccce2f31b0049e3caad088021caaaace02895e084bde83eeb5" exitCode=0 Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.097118 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c","Type":"ContainerDied","Data":"b4e2bae231d1e2ccce2f31b0049e3caad088021caaaace02895e084bde83eeb5"} Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.102261 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" event={"ID":"812bcca3-8896-4492-86ff-1df596f0e604","Type":"ContainerStarted","Data":"9c7707486c5f1175326256d08c0328b8d1cfc427d081243107265e7bf96f7ccc"} Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.102939 4907 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7x4fp container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.89:8081/healthz\": dial tcp 10.217.0.89:8081: connect: connection refused" start-of-body= Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.102987 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": dial tcp 10.217.0.89:8081: connect: connection refused" Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.103011 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.107686 4907 generic.go:334] "Generic (PLEG): container finished" podID="a4aa00b3-8a54-4f84-907d-34a73b93944f" 
containerID="4cfb754c9a23cd806c6f62d79042c13099fd3acb70f4b669e95dbd00fafa1efd" exitCode=1 Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.107784 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97" event={"ID":"a4aa00b3-8a54-4f84-907d-34a73b93944f","Type":"ContainerDied","Data":"4cfb754c9a23cd806c6f62d79042c13099fd3acb70f4b669e95dbd00fafa1efd"} Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.109297 4907 scope.go:117] "RemoveContainer" containerID="4cfb754c9a23cd806c6f62d79042c13099fd3acb70f4b669e95dbd00fafa1efd" Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.122909 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" event={"ID":"a05cfe48-4bf5-4199-aefa-de59259798c4","Type":"ContainerStarted","Data":"ed018f6702dc598ff92e7eed4585570a267b8a651ab1ad783d28f16839530a48"} Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.123238 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.128895 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-qb9qr" event={"ID":"c2d359e7-9de4-4357-ae4c-8da07c1a880c","Type":"ContainerStarted","Data":"a59cd9310b04d2c3030d9c9137668d4d482a1a3e8db5e305ae5b66810894471c"} Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.136690 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" event={"ID":"f22de95d-f437-432c-917a-a08c082e02c4","Type":"ContainerStarted","Data":"e020b203bae7108f7330d39e69972e7d8154282b778b55c83cde88eb9abd4348"} Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.137673 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.146518 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" event={"ID":"f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b","Type":"ContainerStarted","Data":"6af4de647cca09b70f016ca9c69666adf0119f1f8ae8673efb6e177ed67e3974"} Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.147151 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.527771 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerName="galera" containerID="cri-o://641b5e7c557227e0f34d068ecbb86ed3c19d649b1a3820d27d4203ab008cf941" gracePeriod=23 Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.759289 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="e57d2b03-9116-4a79-bfc2-5b802cf62910" containerName="galera" containerID="cri-o://1ace59d89fc8097fca650f5dd330c7a4a02797cb0386774384bb0ef81ec64e5d" gracePeriod=22 Jan 27 19:30:31 crc kubenswrapper[4907]: I0127 19:30:31.947633 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-csdnr" Jan 27 19:30:32 crc kubenswrapper[4907]: I0127 19:30:32.160854 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gfl97" event={"ID":"a4aa00b3-8a54-4f84-907d-34a73b93944f","Type":"ContainerStarted","Data":"12893a0063c938ff84fabbbadb9431ecd0cb42b57fe429141f92bcf7596b7b46"} Jan 27 19:30:32 crc kubenswrapper[4907]: I0127 19:30:32.161856 4907 patch_prober.go:28] interesting 
pod/observability-operator-59bdc8b94-7x4fp container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.89:8081/healthz\": dial tcp 10.217.0.89:8081: connect: connection refused" start-of-body= Jan 27 19:30:32 crc kubenswrapper[4907]: I0127 19:30:32.161938 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" podUID="812bcca3-8896-4492-86ff-1df596f0e604" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": dial tcp 10.217.0.89:8081: connect: connection refused" Jan 27 19:30:32 crc kubenswrapper[4907]: E0127 19:30:32.671647 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641b5e7c557227e0f34d068ecbb86ed3c19d649b1a3820d27d4203ab008cf941" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 27 19:30:32 crc kubenswrapper[4907]: E0127 19:30:32.683113 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641b5e7c557227e0f34d068ecbb86ed3c19d649b1a3820d27d4203ab008cf941" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 27 19:30:32 crc kubenswrapper[4907]: E0127 19:30:32.691912 4907 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641b5e7c557227e0f34d068ecbb86ed3c19d649b1a3820d27d4203ab008cf941" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 27 19:30:32 crc kubenswrapper[4907]: E0127 19:30:32.691971 4907 prober.go:104] "Probe errored" err="rpc error: code = Unknown 
desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerName="galera" Jan 27 19:30:32 crc kubenswrapper[4907]: I0127 19:30:32.753736 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e57d2b03-9116-4a79-bfc2-5b802cf62910" containerName="galera" probeResult="failure" output="command timed out" Jan 27 19:30:32 crc kubenswrapper[4907]: I0127 19:30:32.779719 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-csdnr" Jan 27 19:30:33 crc kubenswrapper[4907]: I0127 19:30:33.075670 4907 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" podUID="8a6e2a40-e233-4dbe-9b63-0fecf3fc1487" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": dial tcp 10.217.0.115:8081: connect: connection refused" Jan 27 19:30:33 crc kubenswrapper[4907]: I0127 19:30:33.183768 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cc0b779-ca13-49be-91c1-ea2eb4a99d9c","Type":"ContainerStarted","Data":"84182b961cc5aee06dab4663b064c9014f34de7dacf97a434dd8c57dc54ad909"} Jan 27 19:30:34 crc kubenswrapper[4907]: I0127 19:30:34.677901 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="621bccf6-c3e9-4b2d-821b-217848191c27" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 19:30:35 crc kubenswrapper[4907]: I0127 19:30:35.006456 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7"] Jan 27 19:30:35 crc kubenswrapper[4907]: W0127 19:30:35.196815 4907 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a233285_9953_450d_a8a9_b7dc65737a09.slice/crio-3e00c0bb948e32b145b30ce0dd3b1f0e9a59fbc52d8713a2c734e1ed66074441 WatchSource:0}: Error finding container 3e00c0bb948e32b145b30ce0dd3b1f0e9a59fbc52d8713a2c734e1ed66074441: Status 404 returned error can't find the container with id 3e00c0bb948e32b145b30ce0dd3b1f0e9a59fbc52d8713a2c734e1ed66074441 Jan 27 19:30:35 crc kubenswrapper[4907]: I0127 19:30:35.211437 4907 generic.go:334] "Generic (PLEG): container finished" podID="0b24ac54-7ca4-4b1a-b26c-41ce82025599" containerID="641b5e7c557227e0f34d068ecbb86ed3c19d649b1a3820d27d4203ab008cf941" exitCode=0 Jan 27 19:30:35 crc kubenswrapper[4907]: I0127 19:30:35.211485 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0b24ac54-7ca4-4b1a-b26c-41ce82025599","Type":"ContainerDied","Data":"641b5e7c557227e0f34d068ecbb86ed3c19d649b1a3820d27d4203ab008cf941"} Jan 27 19:30:35 crc kubenswrapper[4907]: I0127 19:30:35.805015 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7c754559d6-wt8dc" Jan 27 19:30:35 crc kubenswrapper[4907]: I0127 19:30:35.876262 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 19:30:36 crc kubenswrapper[4907]: I0127 19:30:36.226067 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" event={"ID":"4a233285-9953-450d-a8a9-b7dc65737a09","Type":"ContainerStarted","Data":"6c218baa2a032bad9616784e44f52bf56ac92d62e4deb36a19b1d6cf6a7ce035"} Jan 27 19:30:36 crc kubenswrapper[4907]: I0127 19:30:36.226358 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" 
event={"ID":"4a233285-9953-450d-a8a9-b7dc65737a09","Type":"ContainerStarted","Data":"3e00c0bb948e32b145b30ce0dd3b1f0e9a59fbc52d8713a2c734e1ed66074441"} Jan 27 19:30:36 crc kubenswrapper[4907]: I0127 19:30:36.231747 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0b24ac54-7ca4-4b1a-b26c-41ce82025599","Type":"ContainerStarted","Data":"660c28f22184627338c57d8f4762e86d2f2775b412814b98dc5dce3a067ac3b8"} Jan 27 19:30:36 crc kubenswrapper[4907]: I0127 19:30:36.290140 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" podStartSLOduration=36.288805291 podStartE2EDuration="36.288805291s" podCreationTimestamp="2026-01-27 19:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:30:36.277014067 +0000 UTC m=+5091.406296679" watchObservedRunningTime="2026-01-27 19:30:36.288805291 +0000 UTC m=+5091.418087903" Jan 27 19:30:36 crc kubenswrapper[4907]: I0127 19:30:36.787089 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 19:30:36 crc kubenswrapper[4907]: I0127 19:30:36.793108 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-nznnn" Jan 27 19:30:36 crc kubenswrapper[4907]: I0127 19:30:36.803127 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-7hgqc" Jan 27 19:30:36 crc kubenswrapper[4907]: I0127 19:30:36.807863 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6lprh" Jan 27 19:30:36 crc kubenswrapper[4907]: I0127 19:30:36.808016 4907 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-7x4fp" Jan 27 19:30:36 crc kubenswrapper[4907]: I0127 19:30:36.893575 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-b29cj" Jan 27 19:30:37 crc kubenswrapper[4907]: I0127 19:30:37.043767 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-hb2q7" Jan 27 19:30:37 crc kubenswrapper[4907]: I0127 19:30:37.171637 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mst5f" Jan 27 19:30:37 crc kubenswrapper[4907]: I0127 19:30:37.207876 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-9t69q" Jan 27 19:30:37 crc kubenswrapper[4907]: I0127 19:30:37.226861 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-l2pdl" Jan 27 19:30:37 crc kubenswrapper[4907]: I0127 19:30:37.247713 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-fnh99" Jan 27 19:30:37 crc kubenswrapper[4907]: I0127 19:30:37.354238 4907 generic.go:334] "Generic (PLEG): container finished" podID="e57d2b03-9116-4a79-bfc2-5b802cf62910" containerID="1ace59d89fc8097fca650f5dd330c7a4a02797cb0386774384bb0ef81ec64e5d" exitCode=0 Jan 27 19:30:37 crc kubenswrapper[4907]: I0127 19:30:37.354535 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e57d2b03-9116-4a79-bfc2-5b802cf62910","Type":"ContainerDied","Data":"1ace59d89fc8097fca650f5dd330c7a4a02797cb0386774384bb0ef81ec64e5d"} Jan 27 19:30:37 crc 
kubenswrapper[4907]: I0127 19:30:37.397696 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-tn4d6" Jan 27 19:30:37 crc kubenswrapper[4907]: I0127 19:30:37.492666 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="621bccf6-c3e9-4b2d-821b-217848191c27" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 19:30:37 crc kubenswrapper[4907]: I0127 19:30:37.562233 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ph8fw" Jan 27 19:30:37 crc kubenswrapper[4907]: I0127 19:30:37.718864 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nrdnf" Jan 27 19:30:38 crc kubenswrapper[4907]: I0127 19:30:38.373820 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e57d2b03-9116-4a79-bfc2-5b802cf62910","Type":"ContainerStarted","Data":"512d82b9b7ca402fb2e90662c6f6e6f6fd33ab87f4263b765821c926f7c25ec4"} Jan 27 19:30:38 crc kubenswrapper[4907]: I0127 19:30:38.377992 4907 generic.go:334] "Generic (PLEG): container finished" podID="4a233285-9953-450d-a8a9-b7dc65737a09" containerID="6c218baa2a032bad9616784e44f52bf56ac92d62e4deb36a19b1d6cf6a7ce035" exitCode=0 Jan 27 19:30:38 crc kubenswrapper[4907]: I0127 19:30:38.378043 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" event={"ID":"4a233285-9953-450d-a8a9-b7dc65737a09","Type":"ContainerDied","Data":"6c218baa2a032bad9616784e44f52bf56ac92d62e4deb36a19b1d6cf6a7ce035"} Jan 27 19:30:39 crc kubenswrapper[4907]: I0127 19:30:39.511385 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators-redhat/loki-operator-controller-manager-7b8dfd4994-zw4xr" Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.595102 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.680966 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="621bccf6-c3e9-4b2d-821b-217848191c27" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.681093 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.687619 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"6d6b1006ad0099555abb70becddbbd89eb4c1824b203c60887a680fde2c3dada"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.688025 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="621bccf6-c3e9-4b2d-821b-217848191c27" containerName="cinder-scheduler" containerID="cri-o://6d6b1006ad0099555abb70becddbbd89eb4c1824b203c60887a680fde2c3dada" gracePeriod=30 Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.714606 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ph58\" (UniqueName: \"kubernetes.io/projected/4a233285-9953-450d-a8a9-b7dc65737a09-kube-api-access-6ph58\") pod \"4a233285-9953-450d-a8a9-b7dc65737a09\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.715059 4907 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a233285-9953-450d-a8a9-b7dc65737a09-secret-volume\") pod \"4a233285-9953-450d-a8a9-b7dc65737a09\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.715132 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a233285-9953-450d-a8a9-b7dc65737a09-config-volume\") pod \"4a233285-9953-450d-a8a9-b7dc65737a09\" (UID: \"4a233285-9953-450d-a8a9-b7dc65737a09\") " Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.717949 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a233285-9953-450d-a8a9-b7dc65737a09-config-volume" (OuterVolumeSpecName: "config-volume") pod "4a233285-9953-450d-a8a9-b7dc65737a09" (UID: "4a233285-9953-450d-a8a9-b7dc65737a09"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.773224 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a233285-9953-450d-a8a9-b7dc65737a09-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4a233285-9953-450d-a8a9-b7dc65737a09" (UID: "4a233285-9953-450d-a8a9-b7dc65737a09"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.776840 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a233285-9953-450d-a8a9-b7dc65737a09-kube-api-access-6ph58" (OuterVolumeSpecName: "kube-api-access-6ph58") pod "4a233285-9953-450d-a8a9-b7dc65737a09" (UID: "4a233285-9953-450d-a8a9-b7dc65737a09"). InnerVolumeSpecName "kube-api-access-6ph58". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.827162 4907 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a233285-9953-450d-a8a9-b7dc65737a09-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.827213 4907 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a233285-9953-450d-a8a9-b7dc65737a09-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:30:40 crc kubenswrapper[4907]: I0127 19:30:40.827230 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ph58\" (UniqueName: \"kubernetes.io/projected/4a233285-9953-450d-a8a9-b7dc65737a09-kube-api-access-6ph58\") on node \"crc\" DevicePath \"\"" Jan 27 19:30:41 crc kubenswrapper[4907]: I0127 19:30:41.177784 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 27 19:30:41 crc kubenswrapper[4907]: I0127 19:30:41.178303 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 27 19:30:41 crc kubenswrapper[4907]: I0127 19:30:41.413973 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" event={"ID":"4a233285-9953-450d-a8a9-b7dc65737a09","Type":"ContainerDied","Data":"3e00c0bb948e32b145b30ce0dd3b1f0e9a59fbc52d8713a2c734e1ed66074441"} Jan 27 19:30:41 crc kubenswrapper[4907]: I0127 19:30:41.414477 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-9xtk7" Jan 27 19:30:41 crc kubenswrapper[4907]: I0127 19:30:41.419189 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e00c0bb948e32b145b30ce0dd3b1f0e9a59fbc52d8713a2c734e1ed66074441" Jan 27 19:30:41 crc kubenswrapper[4907]: I0127 19:30:41.739785 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6"] Jan 27 19:30:41 crc kubenswrapper[4907]: I0127 19:30:41.763656 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-rhbh6"] Jan 27 19:30:42 crc kubenswrapper[4907]: I0127 19:30:42.654201 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 27 19:30:42 crc kubenswrapper[4907]: I0127 19:30:42.654647 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 27 19:30:43 crc kubenswrapper[4907]: I0127 19:30:43.086752 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9" Jan 27 19:30:43 crc kubenswrapper[4907]: I0127 19:30:43.096487 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" podUID="b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f" containerName="oauth-openshift" containerID="cri-o://584633d93075f3cab246f00b53f57b6d6dbc4bb552695d874bc24adb82e896e9" gracePeriod=15 Jan 27 19:30:43 crc kubenswrapper[4907]: I0127 19:30:43.442049 4907 generic.go:334] "Generic (PLEG): container finished" podID="b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f" containerID="584633d93075f3cab246f00b53f57b6d6dbc4bb552695d874bc24adb82e896e9" exitCode=0 Jan 27 19:30:43 crc kubenswrapper[4907]: I0127 19:30:43.442149 4907 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" event={"ID":"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f","Type":"ContainerDied","Data":"584633d93075f3cab246f00b53f57b6d6dbc4bb552695d874bc24adb82e896e9"} Jan 27 19:30:43 crc kubenswrapper[4907]: I0127 19:30:43.444051 4907 generic.go:334] "Generic (PLEG): container finished" podID="621bccf6-c3e9-4b2d-821b-217848191c27" containerID="6d6b1006ad0099555abb70becddbbd89eb4c1824b203c60887a680fde2c3dada" exitCode=0 Jan 27 19:30:43 crc kubenswrapper[4907]: I0127 19:30:43.444096 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"621bccf6-c3e9-4b2d-821b-217848191c27","Type":"ContainerDied","Data":"6d6b1006ad0099555abb70becddbbd89eb4c1824b203c60887a680fde2c3dada"} Jan 27 19:30:43 crc kubenswrapper[4907]: I0127 19:30:43.804714 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8fea3de-b1db-4c31-8636-329b2d296f02" path="/var/lib/kubelet/pods/a8fea3de-b1db-4c31-8636-329b2d296f02/volumes" Jan 27 19:30:45 crc kubenswrapper[4907]: I0127 19:30:45.512788 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" event={"ID":"b8a8fcf5-2457-47d4-9f00-6aad27a2cc1f","Type":"ContainerStarted","Data":"d8633d387b7e48ef0d9854b44d24d05af7e0a2ff93afe9d37d5aceefcb36ff39"} Jan 27 19:30:45 crc kubenswrapper[4907]: I0127 19:30:45.517501 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 19:30:45 crc kubenswrapper[4907]: I0127 19:30:45.818163 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-788784fd4b-j7f9b" Jan 27 19:30:46 crc kubenswrapper[4907]: I0127 19:30:46.527455 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"621bccf6-c3e9-4b2d-821b-217848191c27","Type":"ContainerStarted","Data":"460ae35a74b5a71c62f19fa9ad954c218661ff94bf5e67271f5521cdf31822d7"} Jan 27 19:30:47 crc kubenswrapper[4907]: I0127 19:30:47.276299 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 19:30:51 crc kubenswrapper[4907]: I0127 19:30:51.411419 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7b674f54c6-zhrj9" podUID="a2362241-225f-40e2-9be3-67766a65316b" containerName="console" containerID="cri-o://20a55c416798d5f6571433d370c7db5061a997e42c61cb7e7765b2828ccc1615" gracePeriod=15 Jan 27 19:30:51 crc kubenswrapper[4907]: I0127 19:30:51.616859 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7b674f54c6-zhrj9_a2362241-225f-40e2-9be3-67766a65316b/console/0.log" Jan 27 19:30:51 crc kubenswrapper[4907]: I0127 19:30:51.617213 4907 generic.go:334] "Generic (PLEG): container finished" podID="a2362241-225f-40e2-9be3-67766a65316b" containerID="20a55c416798d5f6571433d370c7db5061a997e42c61cb7e7765b2828ccc1615" exitCode=2 Jan 27 19:30:51 crc kubenswrapper[4907]: I0127 19:30:51.617277 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b674f54c6-zhrj9" event={"ID":"a2362241-225f-40e2-9be3-67766a65316b","Type":"ContainerDied","Data":"20a55c416798d5f6571433d370c7db5061a997e42c61cb7e7765b2828ccc1615"} Jan 27 19:30:52 crc kubenswrapper[4907]: I0127 19:30:52.294824 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="621bccf6-c3e9-4b2d-821b-217848191c27" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 19:30:52 crc kubenswrapper[4907]: I0127 19:30:52.648670 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-7b674f54c6-zhrj9_a2362241-225f-40e2-9be3-67766a65316b/console/0.log" Jan 27 19:30:52 crc kubenswrapper[4907]: I0127 19:30:52.648732 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b674f54c6-zhrj9" event={"ID":"a2362241-225f-40e2-9be3-67766a65316b","Type":"ContainerStarted","Data":"25860d96ab09486b84ce9683bdf1c2b971b91a2b1fa03a3f523c812227772bf9"} Jan 27 19:30:55 crc kubenswrapper[4907]: I0127 19:30:55.830361 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 19:30:55 crc kubenswrapper[4907]: I0127 19:30:55.831166 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 19:30:55 crc kubenswrapper[4907]: I0127 19:30:55.835209 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 19:30:56 crc kubenswrapper[4907]: I0127 19:30:56.522056 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:30:56 crc kubenswrapper[4907]: I0127 19:30:56.529103 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:30:56 crc kubenswrapper[4907]: I0127 19:30:56.529178 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 19:30:56 crc kubenswrapper[4907]: I0127 
19:30:56.530292 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:30:56 crc kubenswrapper[4907]: I0127 19:30:56.530362 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" gracePeriod=600 Jan 27 19:30:56 crc kubenswrapper[4907]: E0127 19:30:56.711686 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:30:56 crc kubenswrapper[4907]: I0127 19:30:56.730071 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" exitCode=0 Jan 27 19:30:56 crc kubenswrapper[4907]: I0127 19:30:56.731091 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b"} Jan 27 19:30:56 crc kubenswrapper[4907]: I0127 19:30:56.735754 4907 scope.go:117] "RemoveContainer" 
containerID="3d879cf7c5d2fcb8a489f4aa5d271325a745968acc2244b4d8143e80b0256eb3" Jan 27 19:30:56 crc kubenswrapper[4907]: I0127 19:30:56.736372 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:30:56 crc kubenswrapper[4907]: E0127 19:30:56.737465 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:30:56 crc kubenswrapper[4907]: I0127 19:30:56.738283 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7b674f54c6-zhrj9" Jan 27 19:30:57 crc kubenswrapper[4907]: I0127 19:30:57.301034 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="621bccf6-c3e9-4b2d-821b-217848191c27" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 19:31:00 crc kubenswrapper[4907]: I0127 19:31:00.754910 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6858498495-rcqbh" Jan 27 19:31:02 crc kubenswrapper[4907]: I0127 19:31:02.229378 4907 scope.go:117] "RemoveContainer" containerID="12ee584c52e810bd9eb16f6197a94605fc43b3769760895d2e0825f38ee71acc" Jan 27 19:31:02 crc kubenswrapper[4907]: I0127 19:31:02.299039 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="621bccf6-c3e9-4b2d-821b-217848191c27" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 19:31:07 crc kubenswrapper[4907]: I0127 19:31:07.309726 4907 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="621bccf6-c3e9-4b2d-821b-217848191c27" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 19:31:10 crc kubenswrapper[4907]: I0127 19:31:10.748542 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:31:10 crc kubenswrapper[4907]: E0127 19:31:10.750814 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:31:12 crc kubenswrapper[4907]: I0127 19:31:12.297923 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="621bccf6-c3e9-4b2d-821b-217848191c27" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 19:31:17 crc kubenswrapper[4907]: I0127 19:31:17.302651 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="621bccf6-c3e9-4b2d-821b-217848191c27" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 19:31:19 crc kubenswrapper[4907]: I0127 19:31:19.126184 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 27 19:31:19 crc kubenswrapper[4907]: I0127 19:31:19.519380 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 27 19:31:19 crc kubenswrapper[4907]: I0127 19:31:19.904990 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/openstack-galera-0" Jan 27 19:31:20 crc kubenswrapper[4907]: I0127 19:31:20.003593 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 27 19:31:22 crc kubenswrapper[4907]: I0127 19:31:22.315688 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.060590 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5v8dc"] Jan 27 19:31:23 crc kubenswrapper[4907]: E0127 19:31:23.067485 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a233285-9953-450d-a8a9-b7dc65737a09" containerName="collect-profiles" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.067523 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a233285-9953-450d-a8a9-b7dc65737a09" containerName="collect-profiles" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.069122 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a233285-9953-450d-a8a9-b7dc65737a09" containerName="collect-profiles" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.086143 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.174057 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5v8dc"] Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.203317 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghgf5\" (UniqueName: \"kubernetes.io/projected/9ff3f9b6-886f-4900-ba62-3d79659faabc-kube-api-access-ghgf5\") pod \"certified-operators-5v8dc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.203667 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-catalog-content\") pod \"certified-operators-5v8dc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.203720 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-utilities\") pod \"certified-operators-5v8dc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.306288 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghgf5\" (UniqueName: \"kubernetes.io/projected/9ff3f9b6-886f-4900-ba62-3d79659faabc-kube-api-access-ghgf5\") pod \"certified-operators-5v8dc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.306394 4907 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-utilities\") pod \"certified-operators-5v8dc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.306414 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-catalog-content\") pod \"certified-operators-5v8dc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.311644 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-catalog-content\") pod \"certified-operators-5v8dc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.315161 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-utilities\") pod \"certified-operators-5v8dc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.345475 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghgf5\" (UniqueName: \"kubernetes.io/projected/9ff3f9b6-886f-4900-ba62-3d79659faabc-kube-api-access-ghgf5\") pod \"certified-operators-5v8dc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:23 crc kubenswrapper[4907]: I0127 19:31:23.442469 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:25 crc kubenswrapper[4907]: I0127 19:31:25.497160 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5v8dc"] Jan 27 19:31:25 crc kubenswrapper[4907]: I0127 19:31:25.800393 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:31:25 crc kubenswrapper[4907]: E0127 19:31:25.801411 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:31:26 crc kubenswrapper[4907]: I0127 19:31:26.448134 4907 generic.go:334] "Generic (PLEG): container finished" podID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerID="23a5dfb16c39f0d53573f1a759dd39f9246437e9f6be389b68a62fa182f3da78" exitCode=0 Jan 27 19:31:26 crc kubenswrapper[4907]: I0127 19:31:26.448210 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v8dc" event={"ID":"9ff3f9b6-886f-4900-ba62-3d79659faabc","Type":"ContainerDied","Data":"23a5dfb16c39f0d53573f1a759dd39f9246437e9f6be389b68a62fa182f3da78"} Jan 27 19:31:26 crc kubenswrapper[4907]: I0127 19:31:26.448256 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v8dc" event={"ID":"9ff3f9b6-886f-4900-ba62-3d79659faabc","Type":"ContainerStarted","Data":"ab68471ed2684e634387c8ca1ec2775e5543154e0c7cf797190737989c85b0b8"} Jan 27 19:31:28 crc kubenswrapper[4907]: I0127 19:31:28.472241 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v8dc" 
event={"ID":"9ff3f9b6-886f-4900-ba62-3d79659faabc","Type":"ContainerStarted","Data":"427b9ab2ff1a8a4b18f5147a5b9222bdad775d4df84fd61e2724195366d33ec1"} Jan 27 19:31:30 crc kubenswrapper[4907]: I0127 19:31:30.521945 4907 generic.go:334] "Generic (PLEG): container finished" podID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerID="427b9ab2ff1a8a4b18f5147a5b9222bdad775d4df84fd61e2724195366d33ec1" exitCode=0 Jan 27 19:31:30 crc kubenswrapper[4907]: I0127 19:31:30.522009 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v8dc" event={"ID":"9ff3f9b6-886f-4900-ba62-3d79659faabc","Type":"ContainerDied","Data":"427b9ab2ff1a8a4b18f5147a5b9222bdad775d4df84fd61e2724195366d33ec1"} Jan 27 19:31:31 crc kubenswrapper[4907]: I0127 19:31:31.543475 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v8dc" event={"ID":"9ff3f9b6-886f-4900-ba62-3d79659faabc","Type":"ContainerStarted","Data":"7e228de8065ba24d932f35b70c422671f75346f9d055ceab258f623e549051f7"} Jan 27 19:31:31 crc kubenswrapper[4907]: I0127 19:31:31.568354 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5v8dc" podStartSLOduration=4.940794692 podStartE2EDuration="9.566417816s" podCreationTimestamp="2026-01-27 19:31:22 +0000 UTC" firstStartedPulling="2026-01-27 19:31:26.451480014 +0000 UTC m=+5141.580762626" lastFinishedPulling="2026-01-27 19:31:31.077103138 +0000 UTC m=+5146.206385750" observedRunningTime="2026-01-27 19:31:31.563331518 +0000 UTC m=+5146.692614130" watchObservedRunningTime="2026-01-27 19:31:31.566417816 +0000 UTC m=+5146.695700418" Jan 27 19:31:33 crc kubenswrapper[4907]: I0127 19:31:33.442990 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:33 crc kubenswrapper[4907]: I0127 19:31:33.444981 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:34 crc kubenswrapper[4907]: I0127 19:31:34.501021 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-5v8dc" podUID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerName="registry-server" probeResult="failure" output=< Jan 27 19:31:34 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:31:34 crc kubenswrapper[4907]: > Jan 27 19:31:39 crc kubenswrapper[4907]: I0127 19:31:39.748215 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:31:39 crc kubenswrapper[4907]: E0127 19:31:39.749134 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:31:44 crc kubenswrapper[4907]: I0127 19:31:44.512352 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-5v8dc" podUID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerName="registry-server" probeResult="failure" output=< Jan 27 19:31:44 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:31:44 crc kubenswrapper[4907]: > Jan 27 19:31:53 crc kubenswrapper[4907]: I0127 19:31:53.512072 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:53 crc kubenswrapper[4907]: I0127 19:31:53.598017 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:53 crc 
kubenswrapper[4907]: I0127 19:31:53.748996 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:31:53 crc kubenswrapper[4907]: E0127 19:31:53.749401 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:31:54 crc kubenswrapper[4907]: I0127 19:31:54.236553 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5v8dc"] Jan 27 19:31:54 crc kubenswrapper[4907]: I0127 19:31:54.853851 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5v8dc" podUID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerName="registry-server" containerID="cri-o://7e228de8065ba24d932f35b70c422671f75346f9d055ceab258f623e549051f7" gracePeriod=2 Jan 27 19:31:55 crc kubenswrapper[4907]: I0127 19:31:55.870037 4907 generic.go:334] "Generic (PLEG): container finished" podID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerID="7e228de8065ba24d932f35b70c422671f75346f9d055ceab258f623e549051f7" exitCode=0 Jan 27 19:31:55 crc kubenswrapper[4907]: I0127 19:31:55.870434 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v8dc" event={"ID":"9ff3f9b6-886f-4900-ba62-3d79659faabc","Type":"ContainerDied","Data":"7e228de8065ba24d932f35b70c422671f75346f9d055ceab258f623e549051f7"} Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.081668 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.208825 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-utilities\") pod \"9ff3f9b6-886f-4900-ba62-3d79659faabc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.208958 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-catalog-content\") pod \"9ff3f9b6-886f-4900-ba62-3d79659faabc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.209067 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghgf5\" (UniqueName: \"kubernetes.io/projected/9ff3f9b6-886f-4900-ba62-3d79659faabc-kube-api-access-ghgf5\") pod \"9ff3f9b6-886f-4900-ba62-3d79659faabc\" (UID: \"9ff3f9b6-886f-4900-ba62-3d79659faabc\") " Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.213632 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-utilities" (OuterVolumeSpecName: "utilities") pod "9ff3f9b6-886f-4900-ba62-3d79659faabc" (UID: "9ff3f9b6-886f-4900-ba62-3d79659faabc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.231017 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ff3f9b6-886f-4900-ba62-3d79659faabc-kube-api-access-ghgf5" (OuterVolumeSpecName: "kube-api-access-ghgf5") pod "9ff3f9b6-886f-4900-ba62-3d79659faabc" (UID: "9ff3f9b6-886f-4900-ba62-3d79659faabc"). InnerVolumeSpecName "kube-api-access-ghgf5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.289053 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ff3f9b6-886f-4900-ba62-3d79659faabc" (UID: "9ff3f9b6-886f-4900-ba62-3d79659faabc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.312330 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.312365 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ff3f9b6-886f-4900-ba62-3d79659faabc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.312376 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghgf5\" (UniqueName: \"kubernetes.io/projected/9ff3f9b6-886f-4900-ba62-3d79659faabc-kube-api-access-ghgf5\") on node \"crc\" DevicePath \"\"" Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.885205 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v8dc" event={"ID":"9ff3f9b6-886f-4900-ba62-3d79659faabc","Type":"ContainerDied","Data":"ab68471ed2684e634387c8ca1ec2775e5543154e0c7cf797190737989c85b0b8"} Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.885518 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5v8dc" Jan 27 19:31:56 crc kubenswrapper[4907]: I0127 19:31:56.885545 4907 scope.go:117] "RemoveContainer" containerID="7e228de8065ba24d932f35b70c422671f75346f9d055ceab258f623e549051f7" Jan 27 19:31:57 crc kubenswrapper[4907]: E0127 19:31:57.031002 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ff3f9b6_886f_4900_ba62_3d79659faabc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ff3f9b6_886f_4900_ba62_3d79659faabc.slice/crio-ab68471ed2684e634387c8ca1ec2775e5543154e0c7cf797190737989c85b0b8\": RecentStats: unable to find data in memory cache]" Jan 27 19:31:57 crc kubenswrapper[4907]: I0127 19:31:57.249915 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5v8dc"] Jan 27 19:31:57 crc kubenswrapper[4907]: I0127 19:31:57.253699 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5v8dc"] Jan 27 19:31:57 crc kubenswrapper[4907]: I0127 19:31:57.285889 4907 scope.go:117] "RemoveContainer" containerID="427b9ab2ff1a8a4b18f5147a5b9222bdad775d4df84fd61e2724195366d33ec1" Jan 27 19:31:57 crc kubenswrapper[4907]: I0127 19:31:57.330175 4907 scope.go:117] "RemoveContainer" containerID="23a5dfb16c39f0d53573f1a759dd39f9246437e9f6be389b68a62fa182f3da78" Jan 27 19:31:57 crc kubenswrapper[4907]: I0127 19:31:57.762273 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ff3f9b6-886f-4900-ba62-3d79659faabc" path="/var/lib/kubelet/pods/9ff3f9b6-886f-4900-ba62-3d79659faabc/volumes" Jan 27 19:32:02 crc kubenswrapper[4907]: I0127 19:32:02.731657 4907 scope.go:117] "RemoveContainer" containerID="13a97c40306874fbe0355ba3ac69117ced9fe9a46d143b9fdf1bd111583618a4" Jan 27 19:32:02 crc 
kubenswrapper[4907]: I0127 19:32:02.790901 4907 scope.go:117] "RemoveContainer" containerID="724fc75becd97a8733f10cbbc65e1b699e3133ea3278947d01cef531ff695827" Jan 27 19:32:08 crc kubenswrapper[4907]: I0127 19:32:08.749197 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:32:08 crc kubenswrapper[4907]: E0127 19:32:08.751113 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:32:21 crc kubenswrapper[4907]: I0127 19:32:21.748766 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:32:21 crc kubenswrapper[4907]: E0127 19:32:21.749959 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:32:26 crc kubenswrapper[4907]: I0127 19:32:26.274697 4907 generic.go:334] "Generic (PLEG): container finished" podID="019838dd-5c5f-40f0-a169-09156549d64c" containerID="a46b349017119500621dd5d81eceaf280f07e4849a6fbfdb2535471de47390a8" exitCode=1 Jan 27 19:32:26 crc kubenswrapper[4907]: I0127 19:32:26.274809 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"019838dd-5c5f-40f0-a169-09156549d64c","Type":"ContainerDied","Data":"a46b349017119500621dd5d81eceaf280f07e4849a6fbfdb2535471de47390a8"}
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.112062 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.243944 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-workdir\") pod \"019838dd-5c5f-40f0-a169-09156549d64c\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") "
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.244062 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cd2t\" (UniqueName: \"kubernetes.io/projected/019838dd-5c5f-40f0-a169-09156549d64c-kube-api-access-2cd2t\") pod \"019838dd-5c5f-40f0-a169-09156549d64c\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") "
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.244108 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-temporary\") pod \"019838dd-5c5f-40f0-a169-09156549d64c\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") "
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.244178 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config\") pod \"019838dd-5c5f-40f0-a169-09156549d64c\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") "
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.244348 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-config-data\") pod \"019838dd-5c5f-40f0-a169-09156549d64c\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") "
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.244431 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config-secret\") pod \"019838dd-5c5f-40f0-a169-09156549d64c\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") "
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.244482 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ssh-key\") pod \"019838dd-5c5f-40f0-a169-09156549d64c\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") "
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.244544 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ca-certs\") pod \"019838dd-5c5f-40f0-a169-09156549d64c\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") "
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.244642 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"019838dd-5c5f-40f0-a169-09156549d64c\" (UID: \"019838dd-5c5f-40f0-a169-09156549d64c\") "
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.246850 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-config-data" (OuterVolumeSpecName: "config-data") pod "019838dd-5c5f-40f0-a169-09156549d64c" (UID: "019838dd-5c5f-40f0-a169-09156549d64c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.247971 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "019838dd-5c5f-40f0-a169-09156549d64c" (UID: "019838dd-5c5f-40f0-a169-09156549d64c"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.250750 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "019838dd-5c5f-40f0-a169-09156549d64c" (UID: "019838dd-5c5f-40f0-a169-09156549d64c"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.254095 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/019838dd-5c5f-40f0-a169-09156549d64c-kube-api-access-2cd2t" (OuterVolumeSpecName: "kube-api-access-2cd2t") pod "019838dd-5c5f-40f0-a169-09156549d64c" (UID: "019838dd-5c5f-40f0-a169-09156549d64c"). InnerVolumeSpecName "kube-api-access-2cd2t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.254328 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "test-operator-logs") pod "019838dd-5c5f-40f0-a169-09156549d64c" (UID: "019838dd-5c5f-40f0-a169-09156549d64c"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.281998 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "019838dd-5c5f-40f0-a169-09156549d64c" (UID: "019838dd-5c5f-40f0-a169-09156549d64c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.288521 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "019838dd-5c5f-40f0-a169-09156549d64c" (UID: "019838dd-5c5f-40f0-a169-09156549d64c"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.290701 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "019838dd-5c5f-40f0-a169-09156549d64c" (UID: "019838dd-5c5f-40f0-a169-09156549d64c"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.304965 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"019838dd-5c5f-40f0-a169-09156549d64c","Type":"ContainerDied","Data":"e485939d601422021124194f41b2edb21d01ebcfbafc4ed78de76b707da03560"}
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.305011 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e485939d601422021124194f41b2edb21d01ebcfbafc4ed78de76b707da03560"
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.305033 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.321844 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "019838dd-5c5f-40f0-a169-09156549d64c" (UID: "019838dd-5c5f-40f0-a169-09156549d64c"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.348482 4907 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.348675 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cd2t\" (UniqueName: \"kubernetes.io/projected/019838dd-5c5f-40f0-a169-09156549d64c-kube-api-access-2cd2t\") on node \"crc\" DevicePath \"\""
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.348690 4907 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/019838dd-5c5f-40f0-a169-09156549d64c-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.348703 4907 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.348715 4907 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/019838dd-5c5f-40f0-a169-09156549d64c-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.348724 4907 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.348734 4907 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ssh-key\") on node \"crc\" DevicePath \"\""
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.348742 4907 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/019838dd-5c5f-40f0-a169-09156549d64c-ca-certs\") on node \"crc\" DevicePath \"\""
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.350156 4907 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.382061 4907 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Jan 27 19:32:28 crc kubenswrapper[4907]: I0127 19:32:28.452135 4907 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Jan 27 19:32:35 crc kubenswrapper[4907]: I0127 19:32:35.757715 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b"
Jan 27 19:32:35 crc kubenswrapper[4907]: E0127 19:32:35.758662 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.577894 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 27 19:32:40 crc kubenswrapper[4907]: E0127 19:32:40.583053 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="019838dd-5c5f-40f0-a169-09156549d64c" containerName="tempest-tests-tempest-tests-runner"
Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.583104 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="019838dd-5c5f-40f0-a169-09156549d64c" containerName="tempest-tests-tempest-tests-runner"
Jan 27 19:32:40 crc kubenswrapper[4907]: E0127 19:32:40.583146 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerName="extract-content"
Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.583160 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerName="extract-content"
Jan 27 19:32:40 crc kubenswrapper[4907]: E0127 19:32:40.583207 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerName="extract-utilities"
Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.583222 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerName="extract-utilities"
Jan 27 19:32:40 crc kubenswrapper[4907]: E0127 19:32:40.583268 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerName="registry-server"
Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.583281 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerName="registry-server"
Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.584238 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="019838dd-5c5f-40f0-a169-09156549d64c" containerName="tempest-tests-tempest-tests-runner"
Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.584291 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ff3f9b6-886f-4900-ba62-3d79659faabc" containerName="registry-server"
Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.587795 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.590709 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.598307 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-5d7cl"
Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.657959 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9364bcb6-d99e-42e9-9f1a-58054d2a59ab\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.658241 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8chhg\" (UniqueName: \"kubernetes.io/projected/9364bcb6-d99e-42e9-9f1a-58054d2a59ab-kube-api-access-8chhg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9364bcb6-d99e-42e9-9f1a-58054d2a59ab\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.761192 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8chhg\" (UniqueName: \"kubernetes.io/projected/9364bcb6-d99e-42e9-9f1a-58054d2a59ab-kube-api-access-8chhg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9364bcb6-d99e-42e9-9f1a-58054d2a59ab\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.761478 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9364bcb6-d99e-42e9-9f1a-58054d2a59ab\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.763542 4907 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9364bcb6-d99e-42e9-9f1a-58054d2a59ab\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.795470 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8chhg\" (UniqueName: \"kubernetes.io/projected/9364bcb6-d99e-42e9-9f1a-58054d2a59ab-kube-api-access-8chhg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9364bcb6-d99e-42e9-9f1a-58054d2a59ab\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.837625 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9364bcb6-d99e-42e9-9f1a-58054d2a59ab\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 27 19:32:40 crc kubenswrapper[4907]: I0127 19:32:40.933864 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 27 19:32:41 crc kubenswrapper[4907]: I0127 19:32:41.446735 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 27 19:32:41 crc kubenswrapper[4907]: I0127 19:32:41.484627 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9364bcb6-d99e-42e9-9f1a-58054d2a59ab","Type":"ContainerStarted","Data":"7edd0bb512134bead825dc1eb3c9eb0bfd7d46c4cb8f42e46fe3224200323ddd"}
Jan 27 19:32:43 crc kubenswrapper[4907]: I0127 19:32:43.514381 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9364bcb6-d99e-42e9-9f1a-58054d2a59ab","Type":"ContainerStarted","Data":"58fafef6bcfd0c50884791b2af0b4f4bd1299c718f4b7731231e7b8e721c19d6"}
Jan 27 19:32:43 crc kubenswrapper[4907]: I0127 19:32:43.536799 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.115243402 podStartE2EDuration="3.536780733s" podCreationTimestamp="2026-01-27 19:32:40 +0000 UTC" firstStartedPulling="2026-01-27 19:32:41.460715137 +0000 UTC m=+5216.589997749" lastFinishedPulling="2026-01-27 19:32:42.882252468 +0000 UTC m=+5218.011535080" observedRunningTime="2026-01-27 19:32:43.530469415 +0000 UTC m=+5218.659752037" watchObservedRunningTime="2026-01-27 19:32:43.536780733 +0000 UTC m=+5218.666063355"
Jan 27 19:32:47 crc kubenswrapper[4907]: I0127 19:32:47.751018 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b"
Jan 27 19:32:47 crc kubenswrapper[4907]: E0127 19:32:47.751955 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:32:55 crc kubenswrapper[4907]: I0127 19:32:55.668582 4907 generic.go:334] "Generic (PLEG): container finished" podID="562a795f-c556-42b2-a9a3-0baf8b3ce4c5" containerID="c13b56b4336fe54ce350cf735e6495e7b316df8aecab8e8659bd933cbe92b3a7" exitCode=0
Jan 27 19:32:55 crc kubenswrapper[4907]: I0127 19:32:55.669064 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" event={"ID":"562a795f-c556-42b2-a9a3-0baf8b3ce4c5","Type":"ContainerDied","Data":"c13b56b4336fe54ce350cf735e6495e7b316df8aecab8e8659bd933cbe92b3a7"}
Jan 27 19:32:56 crc kubenswrapper[4907]: I0127 19:32:56.697388 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw" event={"ID":"562a795f-c556-42b2-a9a3-0baf8b3ce4c5","Type":"ContainerStarted","Data":"73e89c3734604a874e28a202e8b53b97965fe17e4a85b2841a460f720c7962ab"}
Jan 27 19:32:58 crc kubenswrapper[4907]: I0127 19:32:58.748497 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b"
Jan 27 19:32:58 crc kubenswrapper[4907]: E0127 19:32:58.749703 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:33:02 crc kubenswrapper[4907]: I0127 19:33:02.965043 4907 scope.go:117] "RemoveContainer" containerID="0d2a33307c508ac7ec19764558e0c1c55cbf232d5c119fd57dd9bb809242bafa"
Jan 27 19:33:04 crc kubenswrapper[4907]: I0127 19:33:04.357919 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw"
Jan 27 19:33:04 crc kubenswrapper[4907]: I0127 19:33:04.357978 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw"
Jan 27 19:33:09 crc kubenswrapper[4907]: I0127 19:33:09.750452 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b"
Jan 27 19:33:09 crc kubenswrapper[4907]: E0127 19:33:09.752331 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:33:22 crc kubenswrapper[4907]: I0127 19:33:22.748873 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b"
Jan 27 19:33:22 crc kubenswrapper[4907]: E0127 19:33:22.750185 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:33:24 crc kubenswrapper[4907]: I0127 19:33:24.363656 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw"
Jan 27 19:33:24 crc kubenswrapper[4907]: I0127 19:33:24.370314 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7f448b7857-l4vhw"
Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.355309 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9jhvg"]
Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.358415 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9jhvg"
Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.369502 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9jhvg"]
Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.473945 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-catalog-content\") pod \"community-operators-9jhvg\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") " pod="openshift-marketplace/community-operators-9jhvg"
Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.474078 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-utilities\") pod \"community-operators-9jhvg\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") " pod="openshift-marketplace/community-operators-9jhvg"
Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.474166 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vknz8\" (UniqueName: \"kubernetes.io/projected/c9fc7a46-9732-43c0-af69-22c598778530-kube-api-access-vknz8\") pod \"community-operators-9jhvg\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") " pod="openshift-marketplace/community-operators-9jhvg"
Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.579151 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-utilities\") pod \"community-operators-9jhvg\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") " pod="openshift-marketplace/community-operators-9jhvg"
Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.579274 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vknz8\" (UniqueName: \"kubernetes.io/projected/c9fc7a46-9732-43c0-af69-22c598778530-kube-api-access-vknz8\") pod \"community-operators-9jhvg\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") " pod="openshift-marketplace/community-operators-9jhvg"
Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.579325 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-catalog-content\") pod \"community-operators-9jhvg\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") " pod="openshift-marketplace/community-operators-9jhvg"
Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.580689 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-utilities\") pod \"community-operators-9jhvg\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") " pod="openshift-marketplace/community-operators-9jhvg"
Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.581187 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-catalog-content\") pod \"community-operators-9jhvg\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") " pod="openshift-marketplace/community-operators-9jhvg"
Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.602090 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vknz8\" (UniqueName: \"kubernetes.io/projected/c9fc7a46-9732-43c0-af69-22c598778530-kube-api-access-vknz8\") pod \"community-operators-9jhvg\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") " pod="openshift-marketplace/community-operators-9jhvg"
Jan 27 19:33:29 crc kubenswrapper[4907]: I0127 19:33:29.688463 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9jhvg"
Jan 27 19:33:30 crc kubenswrapper[4907]: I0127 19:33:30.344913 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9jhvg"]
Jan 27 19:33:31 crc kubenswrapper[4907]: I0127 19:33:31.114974 4907 generic.go:334] "Generic (PLEG): container finished" podID="c9fc7a46-9732-43c0-af69-22c598778530" containerID="887e7f975f50c66a809f5e183eaba2b08bd65c458d4bfec3135f6e40da92dfde" exitCode=0
Jan 27 19:33:31 crc kubenswrapper[4907]: I0127 19:33:31.115051 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jhvg" event={"ID":"c9fc7a46-9732-43c0-af69-22c598778530","Type":"ContainerDied","Data":"887e7f975f50c66a809f5e183eaba2b08bd65c458d4bfec3135f6e40da92dfde"}
Jan 27 19:33:31 crc kubenswrapper[4907]: I0127 19:33:31.115272 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jhvg" event={"ID":"c9fc7a46-9732-43c0-af69-22c598778530","Type":"ContainerStarted","Data":"6a2aecfd10f37bbdb917506ea88c69939cf002c52a320be18eb04643f8f8eece"}
Jan 27 19:33:33 crc kubenswrapper[4907]: I0127 19:33:33.140488 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jhvg" event={"ID":"c9fc7a46-9732-43c0-af69-22c598778530","Type":"ContainerStarted","Data":"ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd"}
Jan 27 19:33:34 crc kubenswrapper[4907]: I0127 19:33:34.153391 4907 generic.go:334] "Generic (PLEG): container finished" podID="c9fc7a46-9732-43c0-af69-22c598778530" containerID="ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd" exitCode=0
Jan 27 19:33:34 crc kubenswrapper[4907]: I0127 19:33:34.153485 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jhvg" event={"ID":"c9fc7a46-9732-43c0-af69-22c598778530","Type":"ContainerDied","Data":"ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd"}
Jan 27 19:33:35 crc kubenswrapper[4907]: I0127 19:33:35.167058 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jhvg" event={"ID":"c9fc7a46-9732-43c0-af69-22c598778530","Type":"ContainerStarted","Data":"8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132"}
Jan 27 19:33:35 crc kubenswrapper[4907]: I0127 19:33:35.191185 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9jhvg" podStartSLOduration=2.662723274 podStartE2EDuration="6.191166953s" podCreationTimestamp="2026-01-27 19:33:29 +0000 UTC" firstStartedPulling="2026-01-27 19:33:31.117921709 +0000 UTC m=+5266.247204341" lastFinishedPulling="2026-01-27 19:33:34.646365388 +0000 UTC m=+5269.775648020" observedRunningTime="2026-01-27 19:33:35.186206253 +0000 UTC m=+5270.315488875" watchObservedRunningTime="2026-01-27 19:33:35.191166953 +0000 UTC m=+5270.320449565"
Jan 27 19:33:37 crc kubenswrapper[4907]: I0127 19:33:37.748137 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b"
Jan 27 19:33:37 crc kubenswrapper[4907]: E0127 19:33:37.748857 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:33:39 crc kubenswrapper[4907]: I0127 19:33:39.688878 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9jhvg"
Jan 27 19:33:39 crc kubenswrapper[4907]: I0127 19:33:39.689522 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9jhvg"
Jan 27 19:33:39 crc kubenswrapper[4907]: I0127 19:33:39.741408 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9jhvg"
Jan 27 19:33:40 crc kubenswrapper[4907]: I0127 19:33:40.501689 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9jhvg"
Jan 27 19:33:40 crc kubenswrapper[4907]: I0127 19:33:40.560260 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9jhvg"]
Jan 27 19:33:42 crc kubenswrapper[4907]: I0127 19:33:42.243523 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9jhvg" podUID="c9fc7a46-9732-43c0-af69-22c598778530" containerName="registry-server" containerID="cri-o://8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132" gracePeriod=2
Jan 27 19:33:42 crc kubenswrapper[4907]: I0127 19:33:42.849488 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9jhvg"
Jan 27 19:33:42 crc kubenswrapper[4907]: I0127 19:33:42.974353 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-utilities\") pod \"c9fc7a46-9732-43c0-af69-22c598778530\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") "
Jan 27 19:33:42 crc kubenswrapper[4907]: I0127 19:33:42.974660 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-catalog-content\") pod \"c9fc7a46-9732-43c0-af69-22c598778530\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") "
Jan 27 19:33:42 crc kubenswrapper[4907]: I0127 19:33:42.974723 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vknz8\" (UniqueName: \"kubernetes.io/projected/c9fc7a46-9732-43c0-af69-22c598778530-kube-api-access-vknz8\") pod \"c9fc7a46-9732-43c0-af69-22c598778530\" (UID: \"c9fc7a46-9732-43c0-af69-22c598778530\") "
Jan 27 19:33:42 crc kubenswrapper[4907]: I0127 19:33:42.975075 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-utilities" (OuterVolumeSpecName: "utilities") pod "c9fc7a46-9732-43c0-af69-22c598778530" (UID: "c9fc7a46-9732-43c0-af69-22c598778530"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:33:42 crc kubenswrapper[4907]: I0127 19:33:42.975519 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 19:33:42 crc kubenswrapper[4907]: I0127 19:33:42.979840 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9fc7a46-9732-43c0-af69-22c598778530-kube-api-access-vknz8" (OuterVolumeSpecName: "kube-api-access-vknz8") pod "c9fc7a46-9732-43c0-af69-22c598778530" (UID: "c9fc7a46-9732-43c0-af69-22c598778530"). InnerVolumeSpecName "kube-api-access-vknz8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.022274 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9fc7a46-9732-43c0-af69-22c598778530" (UID: "c9fc7a46-9732-43c0-af69-22c598778530"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.077174 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9fc7a46-9732-43c0-af69-22c598778530-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.077206 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vknz8\" (UniqueName: \"kubernetes.io/projected/c9fc7a46-9732-43c0-af69-22c598778530-kube-api-access-vknz8\") on node \"crc\" DevicePath \"\""
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.255573 4907 generic.go:334] "Generic (PLEG): container finished" podID="c9fc7a46-9732-43c0-af69-22c598778530" containerID="8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132" exitCode=0
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.255616 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jhvg" event={"ID":"c9fc7a46-9732-43c0-af69-22c598778530","Type":"ContainerDied","Data":"8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132"}
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.255634 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9jhvg"
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.255657 4907 scope.go:117] "RemoveContainer" containerID="8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132"
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.255645 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jhvg" event={"ID":"c9fc7a46-9732-43c0-af69-22c598778530","Type":"ContainerDied","Data":"6a2aecfd10f37bbdb917506ea88c69939cf002c52a320be18eb04643f8f8eece"}
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.278163 4907 scope.go:117] "RemoveContainer" containerID="ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd"
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.299697 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9jhvg"]
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.306125 4907 scope.go:117] "RemoveContainer" containerID="887e7f975f50c66a809f5e183eaba2b08bd65c458d4bfec3135f6e40da92dfde"
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.317158 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9jhvg"]
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.355565 4907 scope.go:117] "RemoveContainer" containerID="8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132"
Jan 27 19:33:43 crc kubenswrapper[4907]: E0127 19:33:43.358995 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132\": container with ID starting with 8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132 not found: ID does not exist" containerID="8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132"
Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.359058 4907 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132"} err="failed to get container status \"8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132\": rpc error: code = NotFound desc = could not find container \"8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132\": container with ID starting with 8c3586d33af6a1469b998491c9d861bae80bd6a8b72ab3c42d1bb57b266c0132 not found: ID does not exist" Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.359090 4907 scope.go:117] "RemoveContainer" containerID="ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd" Jan 27 19:33:43 crc kubenswrapper[4907]: E0127 19:33:43.362488 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd\": container with ID starting with ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd not found: ID does not exist" containerID="ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd" Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.362647 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd"} err="failed to get container status \"ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd\": rpc error: code = NotFound desc = could not find container \"ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd\": container with ID starting with ab4b5e82466d3713072215c278bc3bd9de9b189b8150a7b5dc1f57a2b65672dd not found: ID does not exist" Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.362758 4907 scope.go:117] "RemoveContainer" containerID="887e7f975f50c66a809f5e183eaba2b08bd65c458d4bfec3135f6e40da92dfde" Jan 27 19:33:43 crc kubenswrapper[4907]: E0127 
19:33:43.363680 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"887e7f975f50c66a809f5e183eaba2b08bd65c458d4bfec3135f6e40da92dfde\": container with ID starting with 887e7f975f50c66a809f5e183eaba2b08bd65c458d4bfec3135f6e40da92dfde not found: ID does not exist" containerID="887e7f975f50c66a809f5e183eaba2b08bd65c458d4bfec3135f6e40da92dfde" Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.363808 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"887e7f975f50c66a809f5e183eaba2b08bd65c458d4bfec3135f6e40da92dfde"} err="failed to get container status \"887e7f975f50c66a809f5e183eaba2b08bd65c458d4bfec3135f6e40da92dfde\": rpc error: code = NotFound desc = could not find container \"887e7f975f50c66a809f5e183eaba2b08bd65c458d4bfec3135f6e40da92dfde\": container with ID starting with 887e7f975f50c66a809f5e183eaba2b08bd65c458d4bfec3135f6e40da92dfde not found: ID does not exist" Jan 27 19:33:43 crc kubenswrapper[4907]: I0127 19:33:43.770417 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9fc7a46-9732-43c0-af69-22c598778530" path="/var/lib/kubelet/pods/c9fc7a46-9732-43c0-af69-22c598778530/volumes" Jan 27 19:33:52 crc kubenswrapper[4907]: I0127 19:33:52.748135 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:33:52 crc kubenswrapper[4907]: E0127 19:33:52.749132 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:34:06 crc kubenswrapper[4907]: I0127 19:34:06.748855 
4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:34:06 crc kubenswrapper[4907]: E0127 19:34:06.751922 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:34:19 crc kubenswrapper[4907]: I0127 19:34:19.747835 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:34:19 crc kubenswrapper[4907]: E0127 19:34:19.748841 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.211475 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cl48g/must-gather-s7n67"] Jan 27 19:34:30 crc kubenswrapper[4907]: E0127 19:34:30.220386 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9fc7a46-9732-43c0-af69-22c598778530" containerName="extract-content" Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.220404 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9fc7a46-9732-43c0-af69-22c598778530" containerName="extract-content" Jan 27 19:34:30 crc kubenswrapper[4907]: E0127 19:34:30.220436 4907 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c9fc7a46-9732-43c0-af69-22c598778530" containerName="extract-utilities" Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.220442 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9fc7a46-9732-43c0-af69-22c598778530" containerName="extract-utilities" Jan 27 19:34:30 crc kubenswrapper[4907]: E0127 19:34:30.220458 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9fc7a46-9732-43c0-af69-22c598778530" containerName="registry-server" Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.220466 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9fc7a46-9732-43c0-af69-22c598778530" containerName="registry-server" Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.220679 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9fc7a46-9732-43c0-af69-22c598778530" containerName="registry-server" Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.221957 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cl48g/must-gather-s7n67" Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.226336 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-cl48g"/"default-dockercfg-j8vh7" Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.234179 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cl48g"/"kube-root-ca.crt" Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.234519 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cl48g"/"openshift-service-ca.crt" Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.247129 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cl48g/must-gather-s7n67"] Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.307191 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phssx\" (UniqueName: 
\"kubernetes.io/projected/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-kube-api-access-phssx\") pod \"must-gather-s7n67\" (UID: \"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61\") " pod="openshift-must-gather-cl48g/must-gather-s7n67" Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.307312 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-must-gather-output\") pod \"must-gather-s7n67\" (UID: \"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61\") " pod="openshift-must-gather-cl48g/must-gather-s7n67" Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.409409 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phssx\" (UniqueName: \"kubernetes.io/projected/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-kube-api-access-phssx\") pod \"must-gather-s7n67\" (UID: \"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61\") " pod="openshift-must-gather-cl48g/must-gather-s7n67" Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.409600 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-must-gather-output\") pod \"must-gather-s7n67\" (UID: \"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61\") " pod="openshift-must-gather-cl48g/must-gather-s7n67" Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.410150 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-must-gather-output\") pod \"must-gather-s7n67\" (UID: \"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61\") " pod="openshift-must-gather-cl48g/must-gather-s7n67" Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.435273 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phssx\" (UniqueName: 
\"kubernetes.io/projected/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-kube-api-access-phssx\") pod \"must-gather-s7n67\" (UID: \"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61\") " pod="openshift-must-gather-cl48g/must-gather-s7n67" Jan 27 19:34:30 crc kubenswrapper[4907]: I0127 19:34:30.552690 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cl48g/must-gather-s7n67" Jan 27 19:34:31 crc kubenswrapper[4907]: I0127 19:34:31.075601 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cl48g/must-gather-s7n67"] Jan 27 19:34:31 crc kubenswrapper[4907]: I0127 19:34:31.870850 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cl48g/must-gather-s7n67" event={"ID":"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61","Type":"ContainerStarted","Data":"8a942117ac3981a77accc1c44d9271b4892567e310d9c25d125f2987cf3afee3"} Jan 27 19:34:33 crc kubenswrapper[4907]: I0127 19:34:33.748791 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:34:33 crc kubenswrapper[4907]: E0127 19:34:33.749588 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:34:44 crc kubenswrapper[4907]: I0127 19:34:44.021874 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cl48g/must-gather-s7n67" event={"ID":"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61","Type":"ContainerStarted","Data":"fda1c70437b579b45625ba8bd319bb1f5ded3001420d1c91a8083242ad820aee"} Jan 27 19:34:44 crc kubenswrapper[4907]: I0127 19:34:44.022481 4907 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-must-gather-cl48g/must-gather-s7n67" event={"ID":"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61","Type":"ContainerStarted","Data":"7fa3c367ecb844ced4a80559c00b90adfb8a76e8f87e035467f2e575ec58c538"} Jan 27 19:34:44 crc kubenswrapper[4907]: I0127 19:34:44.039417 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cl48g/must-gather-s7n67" podStartSLOduration=1.68298162 podStartE2EDuration="14.039395735s" podCreationTimestamp="2026-01-27 19:34:30 +0000 UTC" firstStartedPulling="2026-01-27 19:34:31.080039071 +0000 UTC m=+5326.209321683" lastFinishedPulling="2026-01-27 19:34:43.436453186 +0000 UTC m=+5338.565735798" observedRunningTime="2026-01-27 19:34:44.035042322 +0000 UTC m=+5339.164324934" watchObservedRunningTime="2026-01-27 19:34:44.039395735 +0000 UTC m=+5339.168678347" Jan 27 19:34:45 crc kubenswrapper[4907]: I0127 19:34:45.760846 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:34:45 crc kubenswrapper[4907]: E0127 19:34:45.761634 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:34:51 crc kubenswrapper[4907]: I0127 19:34:51.558938 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cl48g/crc-debug-vj7kk"] Jan 27 19:34:51 crc kubenswrapper[4907]: I0127 19:34:51.562922 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-vj7kk" Jan 27 19:34:51 crc kubenswrapper[4907]: I0127 19:34:51.584330 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1be94d37-98c3-436c-a243-ddf6745f4d7a-host\") pod \"crc-debug-vj7kk\" (UID: \"1be94d37-98c3-436c-a243-ddf6745f4d7a\") " pod="openshift-must-gather-cl48g/crc-debug-vj7kk" Jan 27 19:34:51 crc kubenswrapper[4907]: I0127 19:34:51.584649 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs8bj\" (UniqueName: \"kubernetes.io/projected/1be94d37-98c3-436c-a243-ddf6745f4d7a-kube-api-access-cs8bj\") pod \"crc-debug-vj7kk\" (UID: \"1be94d37-98c3-436c-a243-ddf6745f4d7a\") " pod="openshift-must-gather-cl48g/crc-debug-vj7kk" Jan 27 19:34:51 crc kubenswrapper[4907]: I0127 19:34:51.687030 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs8bj\" (UniqueName: \"kubernetes.io/projected/1be94d37-98c3-436c-a243-ddf6745f4d7a-kube-api-access-cs8bj\") pod \"crc-debug-vj7kk\" (UID: \"1be94d37-98c3-436c-a243-ddf6745f4d7a\") " pod="openshift-must-gather-cl48g/crc-debug-vj7kk" Jan 27 19:34:51 crc kubenswrapper[4907]: I0127 19:34:51.687515 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1be94d37-98c3-436c-a243-ddf6745f4d7a-host\") pod \"crc-debug-vj7kk\" (UID: \"1be94d37-98c3-436c-a243-ddf6745f4d7a\") " pod="openshift-must-gather-cl48g/crc-debug-vj7kk" Jan 27 19:34:51 crc kubenswrapper[4907]: I0127 19:34:51.688889 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1be94d37-98c3-436c-a243-ddf6745f4d7a-host\") pod \"crc-debug-vj7kk\" (UID: \"1be94d37-98c3-436c-a243-ddf6745f4d7a\") " pod="openshift-must-gather-cl48g/crc-debug-vj7kk" Jan 27 19:34:51 crc 
kubenswrapper[4907]: I0127 19:34:51.711225 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs8bj\" (UniqueName: \"kubernetes.io/projected/1be94d37-98c3-436c-a243-ddf6745f4d7a-kube-api-access-cs8bj\") pod \"crc-debug-vj7kk\" (UID: \"1be94d37-98c3-436c-a243-ddf6745f4d7a\") " pod="openshift-must-gather-cl48g/crc-debug-vj7kk" Jan 27 19:34:51 crc kubenswrapper[4907]: I0127 19:34:51.890419 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-vj7kk" Jan 27 19:34:52 crc kubenswrapper[4907]: I0127 19:34:52.144661 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cl48g/crc-debug-vj7kk" event={"ID":"1be94d37-98c3-436c-a243-ddf6745f4d7a","Type":"ContainerStarted","Data":"4ab9bf9797d13dbe65626538685553d1d2e55e44420ed8b27f3d4d0d0f17578a"} Jan 27 19:34:58 crc kubenswrapper[4907]: I0127 19:34:58.748288 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:34:58 crc kubenswrapper[4907]: E0127 19:34:58.749247 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:35:08 crc kubenswrapper[4907]: E0127 19:35:08.253415 4907 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Jan 27 19:35:08 crc kubenswrapper[4907]: E0127 19:35:08.256216 4907 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cs8bj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File
,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-vj7kk_openshift-must-gather-cl48g(1be94d37-98c3-436c-a243-ddf6745f4d7a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 19:35:08 crc kubenswrapper[4907]: E0127 19:35:08.257511 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-cl48g/crc-debug-vj7kk" podUID="1be94d37-98c3-436c-a243-ddf6745f4d7a" Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.428420 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jwr9q"] Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.432089 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jwr9q" Jan 27 19:35:08 crc kubenswrapper[4907]: E0127 19:35:08.437703 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-cl48g/crc-debug-vj7kk" podUID="1be94d37-98c3-436c-a243-ddf6745f4d7a" Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.443155 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jwr9q"] Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.473528 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-catalog-content\") pod \"redhat-operators-jwr9q\" (UID: 
\"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " pod="openshift-marketplace/redhat-operators-jwr9q" Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.473684 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwch5\" (UniqueName: \"kubernetes.io/projected/b3b5fc01-3f3e-4604-bec9-99128bef3139-kube-api-access-kwch5\") pod \"redhat-operators-jwr9q\" (UID: \"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " pod="openshift-marketplace/redhat-operators-jwr9q" Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.473707 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-utilities\") pod \"redhat-operators-jwr9q\" (UID: \"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " pod="openshift-marketplace/redhat-operators-jwr9q" Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.575371 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwch5\" (UniqueName: \"kubernetes.io/projected/b3b5fc01-3f3e-4604-bec9-99128bef3139-kube-api-access-kwch5\") pod \"redhat-operators-jwr9q\" (UID: \"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " pod="openshift-marketplace/redhat-operators-jwr9q" Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.575413 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-utilities\") pod \"redhat-operators-jwr9q\" (UID: \"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " pod="openshift-marketplace/redhat-operators-jwr9q" Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.575526 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-catalog-content\") pod \"redhat-operators-jwr9q\" (UID: 
\"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " pod="openshift-marketplace/redhat-operators-jwr9q" Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.576029 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-utilities\") pod \"redhat-operators-jwr9q\" (UID: \"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " pod="openshift-marketplace/redhat-operators-jwr9q" Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.576038 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-catalog-content\") pod \"redhat-operators-jwr9q\" (UID: \"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " pod="openshift-marketplace/redhat-operators-jwr9q" Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.606510 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwch5\" (UniqueName: \"kubernetes.io/projected/b3b5fc01-3f3e-4604-bec9-99128bef3139-kube-api-access-kwch5\") pod \"redhat-operators-jwr9q\" (UID: \"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " pod="openshift-marketplace/redhat-operators-jwr9q" Jan 27 19:35:08 crc kubenswrapper[4907]: I0127 19:35:08.769517 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jwr9q" Jan 27 19:35:09 crc kubenswrapper[4907]: I0127 19:35:09.375576 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jwr9q"] Jan 27 19:35:09 crc kubenswrapper[4907]: I0127 19:35:09.450012 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwr9q" event={"ID":"b3b5fc01-3f3e-4604-bec9-99128bef3139","Type":"ContainerStarted","Data":"87d77272dc3b6d8210720fa55629434c59d66bd02a70f6738c235b56f4cea0dc"} Jan 27 19:35:10 crc kubenswrapper[4907]: I0127 19:35:10.471159 4907 generic.go:334] "Generic (PLEG): container finished" podID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerID="8305cc1eaa193632c8ffb3dcf4e46f3bdb99f30a189e2e8a29989e8caf68d052" exitCode=0 Jan 27 19:35:10 crc kubenswrapper[4907]: I0127 19:35:10.471253 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwr9q" event={"ID":"b3b5fc01-3f3e-4604-bec9-99128bef3139","Type":"ContainerDied","Data":"8305cc1eaa193632c8ffb3dcf4e46f3bdb99f30a189e2e8a29989e8caf68d052"} Jan 27 19:35:10 crc kubenswrapper[4907]: I0127 19:35:10.749877 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:35:10 crc kubenswrapper[4907]: E0127 19:35:10.750211 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:35:12 crc kubenswrapper[4907]: I0127 19:35:12.495026 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwr9q" 
event={"ID":"b3b5fc01-3f3e-4604-bec9-99128bef3139","Type":"ContainerStarted","Data":"08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a"} Jan 27 19:35:18 crc kubenswrapper[4907]: I0127 19:35:18.576626 4907 generic.go:334] "Generic (PLEG): container finished" podID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerID="08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a" exitCode=0 Jan 27 19:35:18 crc kubenswrapper[4907]: I0127 19:35:18.577444 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwr9q" event={"ID":"b3b5fc01-3f3e-4604-bec9-99128bef3139","Type":"ContainerDied","Data":"08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a"} Jan 27 19:35:19 crc kubenswrapper[4907]: I0127 19:35:19.610988 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwr9q" event={"ID":"b3b5fc01-3f3e-4604-bec9-99128bef3139","Type":"ContainerStarted","Data":"be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b"} Jan 27 19:35:19 crc kubenswrapper[4907]: I0127 19:35:19.641943 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jwr9q" podStartSLOduration=3.145876365 podStartE2EDuration="11.641926128s" podCreationTimestamp="2026-01-27 19:35:08 +0000 UTC" firstStartedPulling="2026-01-27 19:35:10.477967733 +0000 UTC m=+5365.607250345" lastFinishedPulling="2026-01-27 19:35:18.974017496 +0000 UTC m=+5374.103300108" observedRunningTime="2026-01-27 19:35:19.635134706 +0000 UTC m=+5374.764417318" watchObservedRunningTime="2026-01-27 19:35:19.641926128 +0000 UTC m=+5374.771208740" Jan 27 19:35:25 crc kubenswrapper[4907]: I0127 19:35:25.678196 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cl48g/crc-debug-vj7kk" event={"ID":"1be94d37-98c3-436c-a243-ddf6745f4d7a","Type":"ContainerStarted","Data":"4bb6fd9aa02800220ee37faf75938e54ec1292809dbcefc01dfb51fb4fde5f23"} 
Jan 27 19:35:25 crc kubenswrapper[4907]: I0127 19:35:25.703865 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cl48g/crc-debug-vj7kk" podStartSLOduration=2.094540754 podStartE2EDuration="34.703842438s" podCreationTimestamp="2026-01-27 19:34:51 +0000 UTC" firstStartedPulling="2026-01-27 19:34:51.962415098 +0000 UTC m=+5347.091697710" lastFinishedPulling="2026-01-27 19:35:24.571716782 +0000 UTC m=+5379.700999394" observedRunningTime="2026-01-27 19:35:25.696713796 +0000 UTC m=+5380.825996408" watchObservedRunningTime="2026-01-27 19:35:25.703842438 +0000 UTC m=+5380.833125050" Jan 27 19:35:25 crc kubenswrapper[4907]: I0127 19:35:25.757962 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:35:25 crc kubenswrapper[4907]: E0127 19:35:25.758275 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:35:28 crc kubenswrapper[4907]: I0127 19:35:28.770433 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jwr9q" Jan 27 19:35:28 crc kubenswrapper[4907]: I0127 19:35:28.771178 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jwr9q" Jan 27 19:35:29 crc kubenswrapper[4907]: I0127 19:35:29.849491 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jwr9q" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerName="registry-server" probeResult="failure" output=< Jan 27 19:35:29 crc kubenswrapper[4907]: 
timeout: failed to connect service ":50051" within 1s Jan 27 19:35:29 crc kubenswrapper[4907]: > Jan 27 19:35:38 crc kubenswrapper[4907]: I0127 19:35:38.748653 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:35:38 crc kubenswrapper[4907]: E0127 19:35:38.750019 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:35:39 crc kubenswrapper[4907]: I0127 19:35:39.835986 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jwr9q" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerName="registry-server" probeResult="failure" output=< Jan 27 19:35:39 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:35:39 crc kubenswrapper[4907]: > Jan 27 19:35:49 crc kubenswrapper[4907]: I0127 19:35:49.838900 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jwr9q" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerName="registry-server" probeResult="failure" output=< Jan 27 19:35:49 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:35:49 crc kubenswrapper[4907]: > Jan 27 19:35:52 crc kubenswrapper[4907]: I0127 19:35:52.748039 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:35:52 crc kubenswrapper[4907]: E0127 19:35:52.749043 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:35:59 crc kubenswrapper[4907]: I0127 19:35:59.828149 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jwr9q" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerName="registry-server" probeResult="failure" output=< Jan 27 19:35:59 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:35:59 crc kubenswrapper[4907]: > Jan 27 19:36:07 crc kubenswrapper[4907]: I0127 19:36:07.748233 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:36:08 crc kubenswrapper[4907]: I0127 19:36:08.152196 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"54094998ff2bbae779505a5ecd55f38a49e0c980c5658af5dfa09c0890c1088f"} Jan 27 19:36:08 crc kubenswrapper[4907]: I0127 19:36:08.830298 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jwr9q" Jan 27 19:36:08 crc kubenswrapper[4907]: I0127 19:36:08.896609 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jwr9q" Jan 27 19:36:09 crc kubenswrapper[4907]: I0127 19:36:09.615304 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jwr9q"] Jan 27 19:36:10 crc kubenswrapper[4907]: I0127 19:36:10.193263 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jwr9q" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" 
containerName="registry-server" containerID="cri-o://be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b" gracePeriod=2 Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.157305 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jwr9q" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.162739 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-utilities\") pod \"b3b5fc01-3f3e-4604-bec9-99128bef3139\" (UID: \"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.162793 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-catalog-content\") pod \"b3b5fc01-3f3e-4604-bec9-99128bef3139\" (UID: \"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.162932 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwch5\" (UniqueName: \"kubernetes.io/projected/b3b5fc01-3f3e-4604-bec9-99128bef3139-kube-api-access-kwch5\") pod \"b3b5fc01-3f3e-4604-bec9-99128bef3139\" (UID: \"b3b5fc01-3f3e-4604-bec9-99128bef3139\") " Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.166718 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-utilities" (OuterVolumeSpecName: "utilities") pod "b3b5fc01-3f3e-4604-bec9-99128bef3139" (UID: "b3b5fc01-3f3e-4604-bec9-99128bef3139"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.174089 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3b5fc01-3f3e-4604-bec9-99128bef3139-kube-api-access-kwch5" (OuterVolumeSpecName: "kube-api-access-kwch5") pod "b3b5fc01-3f3e-4604-bec9-99128bef3139" (UID: "b3b5fc01-3f3e-4604-bec9-99128bef3139"). InnerVolumeSpecName "kube-api-access-kwch5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.239902 4907 generic.go:334] "Generic (PLEG): container finished" podID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerID="be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b" exitCode=0 Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.239953 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwr9q" event={"ID":"b3b5fc01-3f3e-4604-bec9-99128bef3139","Type":"ContainerDied","Data":"be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b"} Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.239991 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwr9q" event={"ID":"b3b5fc01-3f3e-4604-bec9-99128bef3139","Type":"ContainerDied","Data":"87d77272dc3b6d8210720fa55629434c59d66bd02a70f6738c235b56f4cea0dc"} Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.240007 4907 scope.go:117] "RemoveContainer" containerID="be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.239988 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jwr9q" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.266217 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwch5\" (UniqueName: \"kubernetes.io/projected/b3b5fc01-3f3e-4604-bec9-99128bef3139-kube-api-access-kwch5\") on node \"crc\" DevicePath \"\"" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.266256 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.279149 4907 scope.go:117] "RemoveContainer" containerID="08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.312228 4907 scope.go:117] "RemoveContainer" containerID="8305cc1eaa193632c8ffb3dcf4e46f3bdb99f30a189e2e8a29989e8caf68d052" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.317955 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3b5fc01-3f3e-4604-bec9-99128bef3139" (UID: "b3b5fc01-3f3e-4604-bec9-99128bef3139"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.369362 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b5fc01-3f3e-4604-bec9-99128bef3139-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.387372 4907 scope.go:117] "RemoveContainer" containerID="be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b" Jan 27 19:36:11 crc kubenswrapper[4907]: E0127 19:36:11.388063 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b\": container with ID starting with be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b not found: ID does not exist" containerID="be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.389122 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b"} err="failed to get container status \"be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b\": rpc error: code = NotFound desc = could not find container \"be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b\": container with ID starting with be130a8a074c6e6fce6d2963426bbd64b68f8166162ef5c60e5196e08bc0133b not found: ID does not exist" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.389173 4907 scope.go:117] "RemoveContainer" containerID="08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a" Jan 27 19:36:11 crc kubenswrapper[4907]: E0127 19:36:11.390411 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a\": container with ID starting with 08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a not found: ID does not exist" containerID="08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.390450 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a"} err="failed to get container status \"08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a\": rpc error: code = NotFound desc = could not find container \"08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a\": container with ID starting with 08b0ff0222cd46c7bd138ad46db5a2799d1b486dfafb44878b272634369a936a not found: ID does not exist" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.390465 4907 scope.go:117] "RemoveContainer" containerID="8305cc1eaa193632c8ffb3dcf4e46f3bdb99f30a189e2e8a29989e8caf68d052" Jan 27 19:36:11 crc kubenswrapper[4907]: E0127 19:36:11.390866 4907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8305cc1eaa193632c8ffb3dcf4e46f3bdb99f30a189e2e8a29989e8caf68d052\": container with ID starting with 8305cc1eaa193632c8ffb3dcf4e46f3bdb99f30a189e2e8a29989e8caf68d052 not found: ID does not exist" containerID="8305cc1eaa193632c8ffb3dcf4e46f3bdb99f30a189e2e8a29989e8caf68d052" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.390938 4907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8305cc1eaa193632c8ffb3dcf4e46f3bdb99f30a189e2e8a29989e8caf68d052"} err="failed to get container status \"8305cc1eaa193632c8ffb3dcf4e46f3bdb99f30a189e2e8a29989e8caf68d052\": rpc error: code = NotFound desc = could not find container \"8305cc1eaa193632c8ffb3dcf4e46f3bdb99f30a189e2e8a29989e8caf68d052\": container with ID 
starting with 8305cc1eaa193632c8ffb3dcf4e46f3bdb99f30a189e2e8a29989e8caf68d052 not found: ID does not exist" Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.582821 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jwr9q"] Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.593169 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jwr9q"] Jan 27 19:36:11 crc kubenswrapper[4907]: I0127 19:36:11.773513 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" path="/var/lib/kubelet/pods/b3b5fc01-3f3e-4604-bec9-99128bef3139/volumes" Jan 27 19:36:17 crc kubenswrapper[4907]: I0127 19:36:17.325832 4907 generic.go:334] "Generic (PLEG): container finished" podID="1be94d37-98c3-436c-a243-ddf6745f4d7a" containerID="4bb6fd9aa02800220ee37faf75938e54ec1292809dbcefc01dfb51fb4fde5f23" exitCode=0 Jan 27 19:36:17 crc kubenswrapper[4907]: I0127 19:36:17.325953 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cl48g/crc-debug-vj7kk" event={"ID":"1be94d37-98c3-436c-a243-ddf6745f4d7a","Type":"ContainerDied","Data":"4bb6fd9aa02800220ee37faf75938e54ec1292809dbcefc01dfb51fb4fde5f23"} Jan 27 19:36:18 crc kubenswrapper[4907]: I0127 19:36:18.493257 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-vj7kk" Jan 27 19:36:18 crc kubenswrapper[4907]: I0127 19:36:18.532532 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cl48g/crc-debug-vj7kk"] Jan 27 19:36:18 crc kubenswrapper[4907]: I0127 19:36:18.545006 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cl48g/crc-debug-vj7kk"] Jan 27 19:36:18 crc kubenswrapper[4907]: I0127 19:36:18.677315 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs8bj\" (UniqueName: \"kubernetes.io/projected/1be94d37-98c3-436c-a243-ddf6745f4d7a-kube-api-access-cs8bj\") pod \"1be94d37-98c3-436c-a243-ddf6745f4d7a\" (UID: \"1be94d37-98c3-436c-a243-ddf6745f4d7a\") " Jan 27 19:36:18 crc kubenswrapper[4907]: I0127 19:36:18.677539 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1be94d37-98c3-436c-a243-ddf6745f4d7a-host\") pod \"1be94d37-98c3-436c-a243-ddf6745f4d7a\" (UID: \"1be94d37-98c3-436c-a243-ddf6745f4d7a\") " Jan 27 19:36:18 crc kubenswrapper[4907]: I0127 19:36:18.677688 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be94d37-98c3-436c-a243-ddf6745f4d7a-host" (OuterVolumeSpecName: "host") pod "1be94d37-98c3-436c-a243-ddf6745f4d7a" (UID: "1be94d37-98c3-436c-a243-ddf6745f4d7a"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:36:18 crc kubenswrapper[4907]: I0127 19:36:18.679647 4907 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1be94d37-98c3-436c-a243-ddf6745f4d7a-host\") on node \"crc\" DevicePath \"\"" Jan 27 19:36:18 crc kubenswrapper[4907]: I0127 19:36:18.684655 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1be94d37-98c3-436c-a243-ddf6745f4d7a-kube-api-access-cs8bj" (OuterVolumeSpecName: "kube-api-access-cs8bj") pod "1be94d37-98c3-436c-a243-ddf6745f4d7a" (UID: "1be94d37-98c3-436c-a243-ddf6745f4d7a"). InnerVolumeSpecName "kube-api-access-cs8bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:36:18 crc kubenswrapper[4907]: I0127 19:36:18.803215 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs8bj\" (UniqueName: \"kubernetes.io/projected/1be94d37-98c3-436c-a243-ddf6745f4d7a-kube-api-access-cs8bj\") on node \"crc\" DevicePath \"\"" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.346759 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ab9bf9797d13dbe65626538685553d1d2e55e44420ed8b27f3d4d0d0f17578a" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.346886 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-vj7kk" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.761697 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1be94d37-98c3-436c-a243-ddf6745f4d7a" path="/var/lib/kubelet/pods/1be94d37-98c3-436c-a243-ddf6745f4d7a/volumes" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.816716 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cl48g/crc-debug-xqbdj"] Jan 27 19:36:19 crc kubenswrapper[4907]: E0127 19:36:19.817254 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerName="extract-content" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.817272 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerName="extract-content" Jan 27 19:36:19 crc kubenswrapper[4907]: E0127 19:36:19.817306 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerName="extract-utilities" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.817315 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerName="extract-utilities" Jan 27 19:36:19 crc kubenswrapper[4907]: E0127 19:36:19.817337 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be94d37-98c3-436c-a243-ddf6745f4d7a" containerName="container-00" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.817346 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be94d37-98c3-436c-a243-ddf6745f4d7a" containerName="container-00" Jan 27 19:36:19 crc kubenswrapper[4907]: E0127 19:36:19.817370 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerName="registry-server" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.817378 4907 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerName="registry-server" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.817806 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be94d37-98c3-436c-a243-ddf6745f4d7a" containerName="container-00" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.817841 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3b5fc01-3f3e-4604-bec9-99128bef3139" containerName="registry-server" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.818642 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-xqbdj" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.929866 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7x9w\" (UniqueName: \"kubernetes.io/projected/6dd3c651-908f-45ce-ac85-5fa959304ab4-kube-api-access-d7x9w\") pod \"crc-debug-xqbdj\" (UID: \"6dd3c651-908f-45ce-ac85-5fa959304ab4\") " pod="openshift-must-gather-cl48g/crc-debug-xqbdj" Jan 27 19:36:19 crc kubenswrapper[4907]: I0127 19:36:19.930854 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6dd3c651-908f-45ce-ac85-5fa959304ab4-host\") pod \"crc-debug-xqbdj\" (UID: \"6dd3c651-908f-45ce-ac85-5fa959304ab4\") " pod="openshift-must-gather-cl48g/crc-debug-xqbdj" Jan 27 19:36:20 crc kubenswrapper[4907]: I0127 19:36:20.032546 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7x9w\" (UniqueName: \"kubernetes.io/projected/6dd3c651-908f-45ce-ac85-5fa959304ab4-kube-api-access-d7x9w\") pod \"crc-debug-xqbdj\" (UID: \"6dd3c651-908f-45ce-ac85-5fa959304ab4\") " pod="openshift-must-gather-cl48g/crc-debug-xqbdj" Jan 27 19:36:20 crc kubenswrapper[4907]: I0127 19:36:20.032741 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host\" (UniqueName: \"kubernetes.io/host-path/6dd3c651-908f-45ce-ac85-5fa959304ab4-host\") pod \"crc-debug-xqbdj\" (UID: \"6dd3c651-908f-45ce-ac85-5fa959304ab4\") " pod="openshift-must-gather-cl48g/crc-debug-xqbdj" Jan 27 19:36:20 crc kubenswrapper[4907]: I0127 19:36:20.032929 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6dd3c651-908f-45ce-ac85-5fa959304ab4-host\") pod \"crc-debug-xqbdj\" (UID: \"6dd3c651-908f-45ce-ac85-5fa959304ab4\") " pod="openshift-must-gather-cl48g/crc-debug-xqbdj" Jan 27 19:36:20 crc kubenswrapper[4907]: I0127 19:36:20.256488 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7x9w\" (UniqueName: \"kubernetes.io/projected/6dd3c651-908f-45ce-ac85-5fa959304ab4-kube-api-access-d7x9w\") pod \"crc-debug-xqbdj\" (UID: \"6dd3c651-908f-45ce-ac85-5fa959304ab4\") " pod="openshift-must-gather-cl48g/crc-debug-xqbdj" Jan 27 19:36:20 crc kubenswrapper[4907]: I0127 19:36:20.438387 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-xqbdj" Jan 27 19:36:21 crc kubenswrapper[4907]: I0127 19:36:21.371714 4907 generic.go:334] "Generic (PLEG): container finished" podID="6dd3c651-908f-45ce-ac85-5fa959304ab4" containerID="7ee1baf3c951ba0f96d98c51cb27f55aa01d16ba7f006a79294f4c292c0dc22c" exitCode=0 Jan 27 19:36:21 crc kubenswrapper[4907]: I0127 19:36:21.371809 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cl48g/crc-debug-xqbdj" event={"ID":"6dd3c651-908f-45ce-ac85-5fa959304ab4","Type":"ContainerDied","Data":"7ee1baf3c951ba0f96d98c51cb27f55aa01d16ba7f006a79294f4c292c0dc22c"} Jan 27 19:36:21 crc kubenswrapper[4907]: I0127 19:36:21.372327 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cl48g/crc-debug-xqbdj" event={"ID":"6dd3c651-908f-45ce-ac85-5fa959304ab4","Type":"ContainerStarted","Data":"939824f31d588a04ef1a1007458f4cc9252646915f408a5fd45ad47e95372388"} Jan 27 19:36:22 crc kubenswrapper[4907]: I0127 19:36:22.517362 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-xqbdj" Jan 27 19:36:22 crc kubenswrapper[4907]: I0127 19:36:22.590286 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6dd3c651-908f-45ce-ac85-5fa959304ab4-host\") pod \"6dd3c651-908f-45ce-ac85-5fa959304ab4\" (UID: \"6dd3c651-908f-45ce-ac85-5fa959304ab4\") " Jan 27 19:36:22 crc kubenswrapper[4907]: I0127 19:36:22.590748 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7x9w\" (UniqueName: \"kubernetes.io/projected/6dd3c651-908f-45ce-ac85-5fa959304ab4-kube-api-access-d7x9w\") pod \"6dd3c651-908f-45ce-ac85-5fa959304ab4\" (UID: \"6dd3c651-908f-45ce-ac85-5fa959304ab4\") " Jan 27 19:36:22 crc kubenswrapper[4907]: I0127 19:36:22.590344 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dd3c651-908f-45ce-ac85-5fa959304ab4-host" (OuterVolumeSpecName: "host") pod "6dd3c651-908f-45ce-ac85-5fa959304ab4" (UID: "6dd3c651-908f-45ce-ac85-5fa959304ab4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:36:22 crc kubenswrapper[4907]: I0127 19:36:22.591744 4907 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6dd3c651-908f-45ce-ac85-5fa959304ab4-host\") on node \"crc\" DevicePath \"\"" Jan 27 19:36:22 crc kubenswrapper[4907]: I0127 19:36:22.599457 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dd3c651-908f-45ce-ac85-5fa959304ab4-kube-api-access-d7x9w" (OuterVolumeSpecName: "kube-api-access-d7x9w") pod "6dd3c651-908f-45ce-ac85-5fa959304ab4" (UID: "6dd3c651-908f-45ce-ac85-5fa959304ab4"). InnerVolumeSpecName "kube-api-access-d7x9w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:36:22 crc kubenswrapper[4907]: I0127 19:36:22.694037 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7x9w\" (UniqueName: \"kubernetes.io/projected/6dd3c651-908f-45ce-ac85-5fa959304ab4-kube-api-access-d7x9w\") on node \"crc\" DevicePath \"\"" Jan 27 19:36:23 crc kubenswrapper[4907]: I0127 19:36:23.302658 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cl48g/crc-debug-xqbdj"] Jan 27 19:36:23 crc kubenswrapper[4907]: I0127 19:36:23.314009 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cl48g/crc-debug-xqbdj"] Jan 27 19:36:23 crc kubenswrapper[4907]: I0127 19:36:23.393081 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="939824f31d588a04ef1a1007458f4cc9252646915f408a5fd45ad47e95372388" Jan 27 19:36:23 crc kubenswrapper[4907]: I0127 19:36:23.393406 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-xqbdj" Jan 27 19:36:23 crc kubenswrapper[4907]: I0127 19:36:23.760886 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dd3c651-908f-45ce-ac85-5fa959304ab4" path="/var/lib/kubelet/pods/6dd3c651-908f-45ce-ac85-5fa959304ab4/volumes" Jan 27 19:36:24 crc kubenswrapper[4907]: I0127 19:36:24.495426 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cl48g/crc-debug-ntbgs"] Jan 27 19:36:24 crc kubenswrapper[4907]: E0127 19:36:24.496481 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dd3c651-908f-45ce-ac85-5fa959304ab4" containerName="container-00" Jan 27 19:36:24 crc kubenswrapper[4907]: I0127 19:36:24.496507 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dd3c651-908f-45ce-ac85-5fa959304ab4" containerName="container-00" Jan 27 19:36:24 crc kubenswrapper[4907]: I0127 19:36:24.496847 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dd3c651-908f-45ce-ac85-5fa959304ab4" containerName="container-00" Jan 27 19:36:24 crc kubenswrapper[4907]: I0127 19:36:24.497999 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-ntbgs" Jan 27 19:36:24 crc kubenswrapper[4907]: I0127 19:36:24.636157 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgngl\" (UniqueName: \"kubernetes.io/projected/518a242e-cb81-4a13-ab33-41ce20f654ae-kube-api-access-fgngl\") pod \"crc-debug-ntbgs\" (UID: \"518a242e-cb81-4a13-ab33-41ce20f654ae\") " pod="openshift-must-gather-cl48g/crc-debug-ntbgs" Jan 27 19:36:24 crc kubenswrapper[4907]: I0127 19:36:24.636396 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/518a242e-cb81-4a13-ab33-41ce20f654ae-host\") pod \"crc-debug-ntbgs\" (UID: \"518a242e-cb81-4a13-ab33-41ce20f654ae\") " pod="openshift-must-gather-cl48g/crc-debug-ntbgs" Jan 27 19:36:24 crc kubenswrapper[4907]: I0127 19:36:24.738944 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/518a242e-cb81-4a13-ab33-41ce20f654ae-host\") pod \"crc-debug-ntbgs\" (UID: \"518a242e-cb81-4a13-ab33-41ce20f654ae\") " pod="openshift-must-gather-cl48g/crc-debug-ntbgs" Jan 27 19:36:24 crc kubenswrapper[4907]: I0127 19:36:24.739061 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgngl\" (UniqueName: \"kubernetes.io/projected/518a242e-cb81-4a13-ab33-41ce20f654ae-kube-api-access-fgngl\") pod \"crc-debug-ntbgs\" (UID: \"518a242e-cb81-4a13-ab33-41ce20f654ae\") " pod="openshift-must-gather-cl48g/crc-debug-ntbgs" Jan 27 19:36:24 crc kubenswrapper[4907]: I0127 19:36:24.739138 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/518a242e-cb81-4a13-ab33-41ce20f654ae-host\") pod \"crc-debug-ntbgs\" (UID: \"518a242e-cb81-4a13-ab33-41ce20f654ae\") " pod="openshift-must-gather-cl48g/crc-debug-ntbgs" Jan 27 19:36:24 crc 
kubenswrapper[4907]: I0127 19:36:24.757296 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgngl\" (UniqueName: \"kubernetes.io/projected/518a242e-cb81-4a13-ab33-41ce20f654ae-kube-api-access-fgngl\") pod \"crc-debug-ntbgs\" (UID: \"518a242e-cb81-4a13-ab33-41ce20f654ae\") " pod="openshift-must-gather-cl48g/crc-debug-ntbgs" Jan 27 19:36:24 crc kubenswrapper[4907]: I0127 19:36:24.823938 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-ntbgs" Jan 27 19:36:25 crc kubenswrapper[4907]: I0127 19:36:25.414639 4907 generic.go:334] "Generic (PLEG): container finished" podID="518a242e-cb81-4a13-ab33-41ce20f654ae" containerID="6804d0110d95eee1f118a565f667cb8f164ca28b6ea84699a03b0ccca3037f3a" exitCode=0 Jan 27 19:36:25 crc kubenswrapper[4907]: I0127 19:36:25.414748 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cl48g/crc-debug-ntbgs" event={"ID":"518a242e-cb81-4a13-ab33-41ce20f654ae","Type":"ContainerDied","Data":"6804d0110d95eee1f118a565f667cb8f164ca28b6ea84699a03b0ccca3037f3a"} Jan 27 19:36:25 crc kubenswrapper[4907]: I0127 19:36:25.415038 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cl48g/crc-debug-ntbgs" event={"ID":"518a242e-cb81-4a13-ab33-41ce20f654ae","Type":"ContainerStarted","Data":"c80417f25860d94c303ce7e8b5a07dab905dece484c79d6f2b70fdd30d8969cd"} Jan 27 19:36:25 crc kubenswrapper[4907]: I0127 19:36:25.457663 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cl48g/crc-debug-ntbgs"] Jan 27 19:36:25 crc kubenswrapper[4907]: I0127 19:36:25.472305 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cl48g/crc-debug-ntbgs"] Jan 27 19:36:26 crc kubenswrapper[4907]: I0127 19:36:26.569217 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-ntbgs" Jan 27 19:36:26 crc kubenswrapper[4907]: I0127 19:36:26.693795 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/518a242e-cb81-4a13-ab33-41ce20f654ae-host\") pod \"518a242e-cb81-4a13-ab33-41ce20f654ae\" (UID: \"518a242e-cb81-4a13-ab33-41ce20f654ae\") " Jan 27 19:36:26 crc kubenswrapper[4907]: I0127 19:36:26.693881 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgngl\" (UniqueName: \"kubernetes.io/projected/518a242e-cb81-4a13-ab33-41ce20f654ae-kube-api-access-fgngl\") pod \"518a242e-cb81-4a13-ab33-41ce20f654ae\" (UID: \"518a242e-cb81-4a13-ab33-41ce20f654ae\") " Jan 27 19:36:26 crc kubenswrapper[4907]: I0127 19:36:26.695424 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/518a242e-cb81-4a13-ab33-41ce20f654ae-host" (OuterVolumeSpecName: "host") pod "518a242e-cb81-4a13-ab33-41ce20f654ae" (UID: "518a242e-cb81-4a13-ab33-41ce20f654ae"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:36:26 crc kubenswrapper[4907]: I0127 19:36:26.705041 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/518a242e-cb81-4a13-ab33-41ce20f654ae-kube-api-access-fgngl" (OuterVolumeSpecName: "kube-api-access-fgngl") pod "518a242e-cb81-4a13-ab33-41ce20f654ae" (UID: "518a242e-cb81-4a13-ab33-41ce20f654ae"). InnerVolumeSpecName "kube-api-access-fgngl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:36:26 crc kubenswrapper[4907]: I0127 19:36:26.797162 4907 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/518a242e-cb81-4a13-ab33-41ce20f654ae-host\") on node \"crc\" DevicePath \"\"" Jan 27 19:36:26 crc kubenswrapper[4907]: I0127 19:36:26.797205 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgngl\" (UniqueName: \"kubernetes.io/projected/518a242e-cb81-4a13-ab33-41ce20f654ae-kube-api-access-fgngl\") on node \"crc\" DevicePath \"\"" Jan 27 19:36:27 crc kubenswrapper[4907]: I0127 19:36:27.438801 4907 scope.go:117] "RemoveContainer" containerID="6804d0110d95eee1f118a565f667cb8f164ca28b6ea84699a03b0ccca3037f3a" Jan 27 19:36:27 crc kubenswrapper[4907]: I0127 19:36:27.438850 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cl48g/crc-debug-ntbgs" Jan 27 19:36:27 crc kubenswrapper[4907]: I0127 19:36:27.763153 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="518a242e-cb81-4a13-ab33-41ce20f654ae" path="/var/lib/kubelet/pods/518a242e-cb81-4a13-ab33-41ce20f654ae/volumes" Jan 27 19:36:53 crc kubenswrapper[4907]: I0127 19:36:53.241774 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_15bed332-56fa-45cd-8ab4-5d4cced0e671/aodh-api/0.log" Jan 27 19:36:53 crc kubenswrapper[4907]: I0127 19:36:53.458600 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_15bed332-56fa-45cd-8ab4-5d4cced0e671/aodh-listener/0.log" Jan 27 19:36:53 crc kubenswrapper[4907]: I0127 19:36:53.534313 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_15bed332-56fa-45cd-8ab4-5d4cced0e671/aodh-evaluator/0.log" Jan 27 19:36:53 crc kubenswrapper[4907]: I0127 19:36:53.544199 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_aodh-0_15bed332-56fa-45cd-8ab4-5d4cced0e671/aodh-notifier/0.log" Jan 27 19:36:53 crc kubenswrapper[4907]: I0127 19:36:53.720422 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6c7bdc78db-g6vvs_eb7e48e3-f92d-4ee4-9074-9e035a54c8dc/barbican-api-log/0.log" Jan 27 19:36:53 crc kubenswrapper[4907]: I0127 19:36:53.759519 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6c7bdc78db-g6vvs_eb7e48e3-f92d-4ee4-9074-9e035a54c8dc/barbican-api/0.log" Jan 27 19:36:53 crc kubenswrapper[4907]: I0127 19:36:53.894674 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5f9fb848c6-w9s7n_06cb3a1d-b998-43fe-8939-29cd2c3fd31f/barbican-keystone-listener/0.log" Jan 27 19:36:54 crc kubenswrapper[4907]: I0127 19:36:54.036261 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-cc6c576c9-l5q6m_72844033-17b7-4a8b-973d-f8ef443cd529/barbican-worker/0.log" Jan 27 19:36:54 crc kubenswrapper[4907]: I0127 19:36:54.060708 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5f9fb848c6-w9s7n_06cb3a1d-b998-43fe-8939-29cd2c3fd31f/barbican-keystone-listener-log/0.log" Jan 27 19:36:54 crc kubenswrapper[4907]: I0127 19:36:54.195933 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-cc6c576c9-l5q6m_72844033-17b7-4a8b-973d-f8ef443cd529/barbican-worker-log/0.log" Jan 27 19:36:54 crc kubenswrapper[4907]: I0127 19:36:54.310895 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-9jdgj_172533fc-3de0-4a67-91d4-d54dbbf6e0e8/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:36:54 crc kubenswrapper[4907]: I0127 19:36:54.619793 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_8cc0b779-ca13-49be-91c1-ea2eb4a99d9c/ceilometer-central-agent/1.log" Jan 27 19:36:55 crc kubenswrapper[4907]: I0127 19:36:55.156447 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8cc0b779-ca13-49be-91c1-ea2eb4a99d9c/ceilometer-notification-agent/0.log" Jan 27 19:36:55 crc kubenswrapper[4907]: I0127 19:36:55.189216 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8cc0b779-ca13-49be-91c1-ea2eb4a99d9c/proxy-httpd/0.log" Jan 27 19:36:55 crc kubenswrapper[4907]: I0127 19:36:55.199588 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8cc0b779-ca13-49be-91c1-ea2eb4a99d9c/sg-core/0.log" Jan 27 19:36:55 crc kubenswrapper[4907]: I0127 19:36:55.215189 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8cc0b779-ca13-49be-91c1-ea2eb4a99d9c/ceilometer-central-agent/0.log" Jan 27 19:36:55 crc kubenswrapper[4907]: I0127 19:36:55.397228 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f62bd754-7667-406a-9883-2015ddcc3f16/cinder-api-log/0.log" Jan 27 19:36:55 crc kubenswrapper[4907]: I0127 19:36:55.495777 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f62bd754-7667-406a-9883-2015ddcc3f16/cinder-api/0.log" Jan 27 19:36:55 crc kubenswrapper[4907]: I0127 19:36:55.577624 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_621bccf6-c3e9-4b2d-821b-217848191c27/cinder-scheduler/0.log" Jan 27 19:36:55 crc kubenswrapper[4907]: I0127 19:36:55.634528 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_621bccf6-c3e9-4b2d-821b-217848191c27/cinder-scheduler/1.log" Jan 27 19:36:55 crc kubenswrapper[4907]: I0127 19:36:55.707433 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-scheduler-0_621bccf6-c3e9-4b2d-821b-217848191c27/probe/0.log" Jan 27 19:36:55 crc kubenswrapper[4907]: I0127 19:36:55.805831 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-gnrr5_0aabc401-314e-438d-920e-1f984949944c/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:36:55 crc kubenswrapper[4907]: I0127 19:36:55.939985 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-kmjbn_b8f3066f-ed2e-42b5-94ff-e989771dbe8e/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:36:56 crc kubenswrapper[4907]: I0127 19:36:56.037348 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-hhml4_5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e/init/0.log" Jan 27 19:36:56 crc kubenswrapper[4907]: I0127 19:36:56.334727 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-hhml4_5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e/dnsmasq-dns/0.log" Jan 27 19:36:56 crc kubenswrapper[4907]: I0127 19:36:56.339051 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-qwq5j_ad792b6c-ce47-4ef4-964c-e91423a94f1b/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:36:56 crc kubenswrapper[4907]: I0127 19:36:56.343448 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-hhml4_5cf7b3c3-995f-48f8-a74f-3ffaf08f6d1e/init/0.log" Jan 27 19:36:57 crc kubenswrapper[4907]: I0127 19:36:57.110458 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_34586e59-e405-4871-9eb7-6ec0251bc992/glance-httpd/0.log" Jan 27 19:36:57 crc kubenswrapper[4907]: I0127 19:36:57.141572 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_34586e59-e405-4871-9eb7-6ec0251bc992/glance-log/0.log" Jan 27 19:36:57 crc kubenswrapper[4907]: I0127 19:36:57.363374 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_79b7035b-7e7c-40e4-86a8-d1499df47d5f/glance-httpd/0.log" Jan 27 19:36:57 crc kubenswrapper[4907]: I0127 19:36:57.418540 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_79b7035b-7e7c-40e4-86a8-d1499df47d5f/glance-log/0.log" Jan 27 19:36:57 crc kubenswrapper[4907]: I0127 19:36:57.750940 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-7b8679c4d-pw2cq_14d9243a-0abc-40ce-9881-eef907bdafe3/heat-api/0.log" Jan 27 19:36:58 crc kubenswrapper[4907]: I0127 19:36:58.135261 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-h47vc_4cadb1da-1dd2-49ac-a171-c672c006bfa8/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:36:58 crc kubenswrapper[4907]: I0127 19:36:58.162824 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-668f78b-db9cs_0d2540a9-525b-46c6-b0ae-23e163484c98/heat-engine/0.log" Jan 27 19:36:58 crc kubenswrapper[4907]: I0127 19:36:58.358921 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-96749fcd4-hh92n_effdf66a-d041-45e1-a1f0-bd1367a2d80a/heat-cfnapi/0.log" Jan 27 19:36:58 crc kubenswrapper[4907]: I0127 19:36:58.397513 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-fhxmq_daa3c495-5c9e-45cf-b66a-c452e54e9c06/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:36:58 crc kubenswrapper[4907]: I0127 19:36:58.731287 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29492341-l97qr_6e412045-8e45-4718-98e5-17e76c69623a/keystone-cron/0.log" Jan 27 19:36:59 crc kubenswrapper[4907]: I0127 19:36:59.037653 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_edbdf1e9-d0d7-458d-8f5a-891ee37d7483/kube-state-metrics/0.log" Jan 27 19:36:59 crc kubenswrapper[4907]: I0127 19:36:59.136343 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-kxr54_a1ab6c99-0bb2-45ca-9dc8-1d6da396d011/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:36:59 crc kubenswrapper[4907]: I0127 19:36:59.260189 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-k5dbc_cd8ce37e-984e-48a7-afcf-98798042a1c4/logging-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:36:59 crc kubenswrapper[4907]: I0127 19:36:59.303304 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-84847858bd-jp29w_345bd96a-9890-4264-886f-edccc999706b/keystone-api/0.log" Jan 27 19:36:59 crc kubenswrapper[4907]: I0127 19:36:59.576999 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_611de5af-e33a-4aca-88c7-201f7c0e6cf9/mysqld-exporter/0.log" Jan 27 19:36:59 crc kubenswrapper[4907]: I0127 19:36:59.914905 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-89rhc_30518ac3-ca77-4963-8ab9-1f0dd9c596eb/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:36:59 crc kubenswrapper[4907]: I0127 19:36:59.996570 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-74c6c685b5-88m65_eb34862c-021c-4e5e-b4c0-ceffb9222438/neutron-httpd/0.log" Jan 27 19:37:00 crc kubenswrapper[4907]: I0127 19:37:00.146406 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-74c6c685b5-88m65_eb34862c-021c-4e5e-b4c0-ceffb9222438/neutron-api/0.log" Jan 27 19:37:00 crc kubenswrapper[4907]: I0127 19:37:00.679065 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_aafbf219-964f-4436-964e-7ad85e0eb56b/nova-api-log/0.log" Jan 27 19:37:00 crc kubenswrapper[4907]: I0127 19:37:00.726110 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_7a7fd860-ac95-4571-99c5-b416f9a9bae9/nova-cell0-conductor-conductor/0.log" Jan 27 19:37:01 crc kubenswrapper[4907]: I0127 19:37:01.122536 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_aafbf219-964f-4436-964e-7ad85e0eb56b/nova-api-api/0.log" Jan 27 19:37:01 crc kubenswrapper[4907]: I0127 19:37:01.141948 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_4d13569d-0cc7-4ce3-ae16-b72ef4ea170c/nova-cell1-conductor-conductor/0.log" Jan 27 19:37:01 crc kubenswrapper[4907]: I0127 19:37:01.142355 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_3257c75e-f45f-4166-b7ba-66c1990ac2dc/nova-cell1-novncproxy-novncproxy/0.log" Jan 27 19:37:01 crc kubenswrapper[4907]: I0127 19:37:01.414266 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7/nova-metadata-log/0.log" Jan 27 19:37:01 crc kubenswrapper[4907]: I0127 19:37:01.428945 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-cjhx8_c3ad9414-0787-40c9-a907-d59ec160f1dd/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:37:01 crc kubenswrapper[4907]: I0127 19:37:01.917270 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_cfde5407-28a8-4f75-8a72-3ff5a7d5fa8a/nova-scheduler-scheduler/0.log" Jan 27 19:37:01 crc kubenswrapper[4907]: I0127 19:37:01.988739 
4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0b24ac54-7ca4-4b1a-b26c-41ce82025599/mysql-bootstrap/0.log" Jan 27 19:37:02 crc kubenswrapper[4907]: I0127 19:37:02.173314 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0b24ac54-7ca4-4b1a-b26c-41ce82025599/galera/1.log" Jan 27 19:37:02 crc kubenswrapper[4907]: I0127 19:37:02.182829 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0b24ac54-7ca4-4b1a-b26c-41ce82025599/mysql-bootstrap/0.log" Jan 27 19:37:02 crc kubenswrapper[4907]: I0127 19:37:02.241898 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0b24ac54-7ca4-4b1a-b26c-41ce82025599/galera/0.log" Jan 27 19:37:02 crc kubenswrapper[4907]: I0127 19:37:02.532521 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e57d2b03-9116-4a79-bfc2-5b802cf62910/mysql-bootstrap/0.log" Jan 27 19:37:03 crc kubenswrapper[4907]: I0127 19:37:03.045495 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e57d2b03-9116-4a79-bfc2-5b802cf62910/galera/0.log" Jan 27 19:37:03 crc kubenswrapper[4907]: I0127 19:37:03.056437 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e57d2b03-9116-4a79-bfc2-5b802cf62910/galera/1.log" Jan 27 19:37:03 crc kubenswrapper[4907]: I0127 19:37:03.097928 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e57d2b03-9116-4a79-bfc2-5b802cf62910/mysql-bootstrap/0.log" Jan 27 19:37:03 crc kubenswrapper[4907]: I0127 19:37:03.274342 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_8cea1342-da85-42e5-a54b-98b132f7871f/openstackclient/0.log" Jan 27 19:37:03 crc kubenswrapper[4907]: I0127 19:37:03.577965 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-96prz_daaea3c0-a88d-442f-be06-bb95b2825fcc/ovn-controller/0.log" Jan 27 19:37:03 crc kubenswrapper[4907]: I0127 19:37:03.656660 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jxkhc_af6ab393-1e13-4683-81ae-6e28d9261d30/openstack-network-exporter/0.log" Jan 27 19:37:03 crc kubenswrapper[4907]: I0127 19:37:03.848112 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2q6jk_89e5e512-03ab-41c7-8cde-1e20d1f72d0d/ovsdb-server-init/0.log" Jan 27 19:37:04 crc kubenswrapper[4907]: I0127 19:37:04.025341 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4e3cb7a2-d4f9-43bf-a1e5-6486f796f9a7/nova-metadata-metadata/0.log" Jan 27 19:37:04 crc kubenswrapper[4907]: I0127 19:37:04.041774 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2q6jk_89e5e512-03ab-41c7-8cde-1e20d1f72d0d/ovsdb-server-init/0.log" Jan 27 19:37:04 crc kubenswrapper[4907]: I0127 19:37:04.046238 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2q6jk_89e5e512-03ab-41c7-8cde-1e20d1f72d0d/ovs-vswitchd/0.log" Jan 27 19:37:04 crc kubenswrapper[4907]: I0127 19:37:04.174812 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2q6jk_89e5e512-03ab-41c7-8cde-1e20d1f72d0d/ovsdb-server/0.log" Jan 27 19:37:04 crc kubenswrapper[4907]: I0127 19:37:04.424011 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c4f5ec64-0863-45ef-9090-4768ecd34667/openstack-network-exporter/0.log" Jan 27 19:37:04 crc kubenswrapper[4907]: I0127 19:37:04.466058 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ldwl4_1c6e62b9-2bac-4345-8a1c-1fe43ac9d1e7/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:37:05 crc kubenswrapper[4907]: I0127 
19:37:05.179174 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c4f5ec64-0863-45ef-9090-4768ecd34667/ovn-northd/0.log" Jan 27 19:37:05 crc kubenswrapper[4907]: I0127 19:37:05.180756 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_32811f4d-c205-437d-a06c-ac4fff30cead/openstack-network-exporter/0.log" Jan 27 19:37:05 crc kubenswrapper[4907]: I0127 19:37:05.253275 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_32811f4d-c205-437d-a06c-ac4fff30cead/ovsdbserver-nb/0.log" Jan 27 19:37:05 crc kubenswrapper[4907]: I0127 19:37:05.456345 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7185e8ed-9479-43cc-814b-cfcd26e548a5/openstack-network-exporter/0.log" Jan 27 19:37:05 crc kubenswrapper[4907]: I0127 19:37:05.499258 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7185e8ed-9479-43cc-814b-cfcd26e548a5/ovsdbserver-sb/0.log" Jan 27 19:37:05 crc kubenswrapper[4907]: I0127 19:37:05.884031 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c9228204-5d32-47ea-9236-8ae3e4d5eebc/init-config-reloader/0.log" Jan 27 19:37:05 crc kubenswrapper[4907]: I0127 19:37:05.885844 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7bb5448674-jfs9k_4038dea7-e4ef-436d-baf3-47f8757e3bc0/placement-log/0.log" Jan 27 19:37:05 crc kubenswrapper[4907]: I0127 19:37:05.919006 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7bb5448674-jfs9k_4038dea7-e4ef-436d-baf3-47f8757e3bc0/placement-api/0.log" Jan 27 19:37:06 crc kubenswrapper[4907]: I0127 19:37:06.071293 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c9228204-5d32-47ea-9236-8ae3e4d5eebc/init-config-reloader/0.log" Jan 27 19:37:06 crc kubenswrapper[4907]: I0127 
19:37:06.115739 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c9228204-5d32-47ea-9236-8ae3e4d5eebc/prometheus/0.log" Jan 27 19:37:06 crc kubenswrapper[4907]: I0127 19:37:06.138226 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c9228204-5d32-47ea-9236-8ae3e4d5eebc/config-reloader/0.log" Jan 27 19:37:06 crc kubenswrapper[4907]: I0127 19:37:06.173605 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c9228204-5d32-47ea-9236-8ae3e4d5eebc/thanos-sidecar/0.log" Jan 27 19:37:06 crc kubenswrapper[4907]: I0127 19:37:06.344311 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_021272d4-b660-4c16-b9a6-befd84abe2cc/setup-container/0.log" Jan 27 19:37:07 crc kubenswrapper[4907]: I0127 19:37:07.024983 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_021272d4-b660-4c16-b9a6-befd84abe2cc/setup-container/0.log" Jan 27 19:37:07 crc kubenswrapper[4907]: I0127 19:37:07.037918 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_021272d4-b660-4c16-b9a6-befd84abe2cc/rabbitmq/0.log" Jan 27 19:37:07 crc kubenswrapper[4907]: I0127 19:37:07.038142 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0be9e879-df48-4aea-9f07-b297cabca4f3/setup-container/0.log" Jan 27 19:37:07 crc kubenswrapper[4907]: I0127 19:37:07.348653 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0be9e879-df48-4aea-9f07-b297cabca4f3/setup-container/0.log" Jan 27 19:37:07 crc kubenswrapper[4907]: I0127 19:37:07.432648 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_0e0246bb-5533-495d-849f-617b346c8fde/setup-container/0.log" Jan 27 19:37:07 crc kubenswrapper[4907]: I0127 19:37:07.489681 4907 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0be9e879-df48-4aea-9f07-b297cabca4f3/rabbitmq/0.log" Jan 27 19:37:07 crc kubenswrapper[4907]: I0127 19:37:07.616697 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_0e0246bb-5533-495d-849f-617b346c8fde/setup-container/0.log" Jan 27 19:37:07 crc kubenswrapper[4907]: I0127 19:37:07.770585 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_5f8e936e-82a6-49cc-bb09-d247a2d0e47b/setup-container/0.log" Jan 27 19:37:07 crc kubenswrapper[4907]: I0127 19:37:07.785293 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_0e0246bb-5533-495d-849f-617b346c8fde/rabbitmq/0.log" Jan 27 19:37:08 crc kubenswrapper[4907]: I0127 19:37:08.098297 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_5f8e936e-82a6-49cc-bb09-d247a2d0e47b/setup-container/0.log" Jan 27 19:37:08 crc kubenswrapper[4907]: I0127 19:37:08.209689 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-dq6fb_9dcf4e25-6609-484b-98b6-a7c96c0a2c4a/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:37:08 crc kubenswrapper[4907]: I0127 19:37:08.236500 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_5f8e936e-82a6-49cc-bb09-d247a2d0e47b/rabbitmq/0.log" Jan 27 19:37:08 crc kubenswrapper[4907]: I0127 19:37:08.501568 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-tgbss_2872f844-3f1a-4d9b-8f96-5cc01d0cae12/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:37:08 crc kubenswrapper[4907]: I0127 19:37:08.632788 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-bfmn4_de193c6b-eba4-4eb3-95c4-0d7fe875691f/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:37:08 crc kubenswrapper[4907]: I0127 19:37:08.835261 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-56b44_ff08f4dc-f4e3-4e83-b922-32b6296fbee0/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:37:09 crc kubenswrapper[4907]: I0127 19:37:09.025776 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-vlfg7_71334cb5-9354-4f68-91bf-8631e5fa045a/ssh-known-hosts-edpm-deployment/0.log" Jan 27 19:37:09 crc kubenswrapper[4907]: I0127 19:37:09.222370 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d47577fc9-fz5kg_bfb5201d-eb44-42cb-a5ab-49520cc1e741/proxy-server/0.log" Jan 27 19:37:09 crc kubenswrapper[4907]: I0127 19:37:09.359593 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-m9rr7_a5ce2510-00de-4a5b-8d9d-578b21229c8c/swift-ring-rebalance/0.log" Jan 27 19:37:09 crc kubenswrapper[4907]: I0127 19:37:09.366155 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d47577fc9-fz5kg_bfb5201d-eb44-42cb-a5ab-49520cc1e741/proxy-httpd/0.log" Jan 27 19:37:09 crc kubenswrapper[4907]: I0127 19:37:09.601570 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/account-auditor/0.log" Jan 27 19:37:09 crc kubenswrapper[4907]: I0127 19:37:09.612965 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/account-reaper/0.log" Jan 27 19:37:09 crc kubenswrapper[4907]: I0127 19:37:09.637340 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/account-replicator/0.log" Jan 27 19:37:09 crc kubenswrapper[4907]: I0127 19:37:09.804793 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/container-auditor/0.log" Jan 27 19:37:09 crc kubenswrapper[4907]: I0127 19:37:09.807332 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/account-server/0.log" Jan 27 19:37:09 crc kubenswrapper[4907]: I0127 19:37:09.921213 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/container-replicator/0.log" Jan 27 19:37:09 crc kubenswrapper[4907]: I0127 19:37:09.964317 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/container-server/0.log" Jan 27 19:37:10 crc kubenswrapper[4907]: I0127 19:37:10.308263 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/object-auditor/0.log" Jan 27 19:37:10 crc kubenswrapper[4907]: I0127 19:37:10.317382 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/container-updater/0.log" Jan 27 19:37:10 crc kubenswrapper[4907]: I0127 19:37:10.425447 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/object-expirer/0.log" Jan 27 19:37:10 crc kubenswrapper[4907]: I0127 19:37:10.524198 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/object-replicator/0.log" Jan 27 19:37:10 crc kubenswrapper[4907]: I0127 19:37:10.722927 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/object-server/0.log" Jan 27 19:37:10 crc kubenswrapper[4907]: I0127 19:37:10.741453 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/object-updater/0.log" Jan 27 19:37:10 crc kubenswrapper[4907]: I0127 19:37:10.795438 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/rsync/0.log" Jan 27 19:37:10 crc kubenswrapper[4907]: I0127 19:37:10.998595 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7e986b-1dca-4795-85f7-e62cdd92d995/swift-recon-cron/0.log" Jan 27 19:37:11 crc kubenswrapper[4907]: I0127 19:37:11.162535 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-98h4r_fbb41855-75d9-4678-8e5c-7602c99dbf1c/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:37:11 crc kubenswrapper[4907]: I0127 19:37:11.305682 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-9bcfx_36c00f4a-e4e0-472b-a51c-510d44296cf8/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:37:11 crc kubenswrapper[4907]: I0127 19:37:11.600654 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_9364bcb6-d99e-42e9-9f1a-58054d2a59ab/test-operator-logs-container/0.log" Jan 27 19:37:11 crc kubenswrapper[4907]: I0127 19:37:11.608767 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_019838dd-5c5f-40f0-a169-09156549d64c/tempest-tests-tempest-tests-runner/0.log" Jan 27 19:37:11 crc kubenswrapper[4907]: I0127 19:37:11.800113 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-n4qk2_907876b3-4761-4612-9c26-3479222c6b72/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:37:13 crc kubenswrapper[4907]: I0127 19:37:13.765776 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_407bf5df-e69a-49ae-ac93-858be78d98a0/memcached/0.log" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.328092 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5wl5q"] Jan 27 19:37:40 crc kubenswrapper[4907]: E0127 19:37:40.329308 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="518a242e-cb81-4a13-ab33-41ce20f654ae" containerName="container-00" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.329326 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="518a242e-cb81-4a13-ab33-41ce20f654ae" containerName="container-00" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.329642 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="518a242e-cb81-4a13-ab33-41ce20f654ae" containerName="container-00" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.334455 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.344694 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wl5q"] Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.454515 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-utilities\") pod \"redhat-marketplace-5wl5q\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.454733 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thppr\" (UniqueName: \"kubernetes.io/projected/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-kube-api-access-thppr\") pod \"redhat-marketplace-5wl5q\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.454894 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-catalog-content\") pod \"redhat-marketplace-5wl5q\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.557187 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-catalog-content\") pod \"redhat-marketplace-5wl5q\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.557411 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-utilities\") pod \"redhat-marketplace-5wl5q\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.557510 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thppr\" (UniqueName: \"kubernetes.io/projected/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-kube-api-access-thppr\") pod \"redhat-marketplace-5wl5q\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.557820 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-catalog-content\") pod \"redhat-marketplace-5wl5q\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:40 crc kubenswrapper[4907]: I0127 19:37:40.557915 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-utilities\") pod \"redhat-marketplace-5wl5q\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:41 crc kubenswrapper[4907]: I0127 19:37:41.154647 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thppr\" (UniqueName: \"kubernetes.io/projected/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-kube-api-access-thppr\") pod \"redhat-marketplace-5wl5q\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:41 crc kubenswrapper[4907]: I0127 19:37:41.265826 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:41 crc kubenswrapper[4907]: I0127 19:37:41.928279 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wl5q"] Jan 27 19:37:42 crc kubenswrapper[4907]: I0127 19:37:42.335010 4907 generic.go:334] "Generic (PLEG): container finished" podID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" containerID="9e6cbd06744b574913b53f660db9189b287529c9f60c390d7656bf7d99231bfe" exitCode=0 Jan 27 19:37:42 crc kubenswrapper[4907]: I0127 19:37:42.335226 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wl5q" event={"ID":"c39e116a-bdc4-4a6a-94dd-6e1814c9532d","Type":"ContainerDied","Data":"9e6cbd06744b574913b53f660db9189b287529c9f60c390d7656bf7d99231bfe"} Jan 27 19:37:42 crc kubenswrapper[4907]: I0127 19:37:42.335357 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wl5q" event={"ID":"c39e116a-bdc4-4a6a-94dd-6e1814c9532d","Type":"ContainerStarted","Data":"0efc126b62a887aba832cb2392ed15dc7df4335e49f9a652e9b5ff1291105dbc"} Jan 27 19:37:42 crc kubenswrapper[4907]: I0127 19:37:42.338337 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:37:44 crc kubenswrapper[4907]: I0127 19:37:44.356999 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wl5q" event={"ID":"c39e116a-bdc4-4a6a-94dd-6e1814c9532d","Type":"ContainerStarted","Data":"0e0f8e9fc895cde38ae60b21c1a52e52e3f7fdf11973e60bf794410236541eb9"} Jan 27 19:37:44 crc kubenswrapper[4907]: I0127 19:37:44.759896 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd_31e27b41-8fcc-441c-a1cd-0cfedddea164/util/0.log" Jan 27 19:37:44 crc kubenswrapper[4907]: I0127 19:37:44.964210 4907 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd_31e27b41-8fcc-441c-a1cd-0cfedddea164/pull/0.log" Jan 27 19:37:44 crc kubenswrapper[4907]: I0127 19:37:44.985815 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd_31e27b41-8fcc-441c-a1cd-0cfedddea164/util/0.log" Jan 27 19:37:45 crc kubenswrapper[4907]: I0127 19:37:45.026088 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd_31e27b41-8fcc-441c-a1cd-0cfedddea164/pull/0.log" Jan 27 19:37:45 crc kubenswrapper[4907]: I0127 19:37:45.254610 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd_31e27b41-8fcc-441c-a1cd-0cfedddea164/extract/0.log" Jan 27 19:37:45 crc kubenswrapper[4907]: I0127 19:37:45.262438 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd_31e27b41-8fcc-441c-a1cd-0cfedddea164/util/0.log" Jan 27 19:37:45 crc kubenswrapper[4907]: I0127 19:37:45.296615 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_684ad6b0dce71de153299cead16af12f35e281dd8f36fc74b54749203djcnbd_31e27b41-8fcc-441c-a1cd-0cfedddea164/pull/0.log" Jan 27 19:37:45 crc kubenswrapper[4907]: I0127 19:37:45.549525 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-nznnn_018e0dfe-5282-40d5-87db-8551645d6e02/manager/1.log" Jan 27 19:37:45 crc kubenswrapper[4907]: I0127 19:37:45.600495 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-nznnn_018e0dfe-5282-40d5-87db-8551645d6e02/manager/0.log" Jan 27 19:37:45 crc 
kubenswrapper[4907]: I0127 19:37:45.625544 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-8jsvt_e6378a4c-96e5-4151-a0ca-c320fa9b667d/manager/0.log" Jan 27 19:37:45 crc kubenswrapper[4907]: I0127 19:37:45.819876 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-6lprh_277579e8-58c3-4ad7-b902-e62f045ba8c6/manager/0.log" Jan 27 19:37:45 crc kubenswrapper[4907]: I0127 19:37:45.947655 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-7hgqc_a05cfe48-4bf5-4199-aefa-de59259798c4/manager/1.log" Jan 27 19:37:46 crc kubenswrapper[4907]: I0127 19:37:46.122442 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-7hgqc_a05cfe48-4bf5-4199-aefa-de59259798c4/manager/0.log" Jan 27 19:37:46 crc kubenswrapper[4907]: I0127 19:37:46.308144 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-4nlx7_e9f20d2f-16bf-49df-9c41-6fd6faa6ef67/manager/0.log" Jan 27 19:37:46 crc kubenswrapper[4907]: I0127 19:37:46.377186 4907 generic.go:334] "Generic (PLEG): container finished" podID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" containerID="0e0f8e9fc895cde38ae60b21c1a52e52e3f7fdf11973e60bf794410236541eb9" exitCode=0 Jan 27 19:37:46 crc kubenswrapper[4907]: I0127 19:37:46.377226 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wl5q" event={"ID":"c39e116a-bdc4-4a6a-94dd-6e1814c9532d","Type":"ContainerDied","Data":"0e0f8e9fc895cde38ae60b21c1a52e52e3f7fdf11973e60bf794410236541eb9"} Jan 27 19:37:46 crc kubenswrapper[4907]: I0127 19:37:46.430009 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-b29cj_f1ed42c6-98ac-41b8-96df-24919c0f9837/manager/1.log" Jan 27 19:37:46 crc kubenswrapper[4907]: I0127 19:37:46.503607 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-b29cj_f1ed42c6-98ac-41b8-96df-24919c0f9837/manager/0.log" Jan 27 19:37:46 crc kubenswrapper[4907]: I0127 19:37:46.815965 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-hb2q7_c4a64f11-d6ef-487e-afa3-1d9bdbea9424/manager/1.log" Jan 27 19:37:46 crc kubenswrapper[4907]: I0127 19:37:46.890699 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-mrpqf_7c6ac148-bc7a-4480-9155-8f78567a5070/manager/0.log" Jan 27 19:37:47 crc kubenswrapper[4907]: I0127 19:37:47.077616 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-hb2q7_c4a64f11-d6ef-487e-afa3-1d9bdbea9424/manager/0.log" Jan 27 19:37:47 crc kubenswrapper[4907]: I0127 19:37:47.139582 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-kjhgn_e257f81e-9460-4391-a7a5-cca3fc9230d9/manager/0.log" Jan 27 19:37:47 crc kubenswrapper[4907]: I0127 19:37:47.306010 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-mst5f_bc6ebe7e-320a-4193-8db4-3d4574ba1c3b/manager/1.log" Jan 27 19:37:47 crc kubenswrapper[4907]: I0127 19:37:47.394401 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wl5q" event={"ID":"c39e116a-bdc4-4a6a-94dd-6e1814c9532d","Type":"ContainerStarted","Data":"2cf8c64730830719caf33eccab19ea6a740f56201d48faedd2b0964a99b14a4a"} Jan 27 19:37:47 crc 
kubenswrapper[4907]: I0127 19:37:47.418692 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5wl5q" podStartSLOduration=2.920042811 podStartE2EDuration="7.418671228s" podCreationTimestamp="2026-01-27 19:37:40 +0000 UTC" firstStartedPulling="2026-01-27 19:37:42.337135903 +0000 UTC m=+5517.466418515" lastFinishedPulling="2026-01-27 19:37:46.835764309 +0000 UTC m=+5521.965046932" observedRunningTime="2026-01-27 19:37:47.411836595 +0000 UTC m=+5522.541119207" watchObservedRunningTime="2026-01-27 19:37:47.418671228 +0000 UTC m=+5522.547953840" Jan 27 19:37:47 crc kubenswrapper[4907]: I0127 19:37:47.495738 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-9t69q_f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b/manager/1.log" Jan 27 19:37:47 crc kubenswrapper[4907]: I0127 19:37:47.558396 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-mst5f_bc6ebe7e-320a-4193-8db4-3d4574ba1c3b/manager/0.log" Jan 27 19:37:47 crc kubenswrapper[4907]: I0127 19:37:47.732196 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-9t69q_f5936608-3de1-4f9e-b2dc-ae8a1b4cf72b/manager/0.log" Jan 27 19:37:47 crc kubenswrapper[4907]: I0127 19:37:47.757955 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-l2pdl_774ac09a-4164-4e22-9ea2-385ac4ef87eb/manager/1.log" Jan 27 19:37:47 crc kubenswrapper[4907]: I0127 19:37:47.902800 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-l2pdl_774ac09a-4164-4e22-9ea2-385ac4ef87eb/manager/0.log" Jan 27 19:37:48 crc kubenswrapper[4907]: I0127 19:37:48.114521 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-fnh99_bd2d065d-dd6e-43bc-a725-e7fe52c024b1/manager/1.log" Jan 27 19:37:48 crc kubenswrapper[4907]: I0127 19:37:48.179025 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-fnh99_bd2d065d-dd6e-43bc-a725-e7fe52c024b1/manager/0.log" Jan 27 19:37:48 crc kubenswrapper[4907]: I0127 19:37:48.341988 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-tn4d6_a733096f-e99d-4186-8542-1d8cb16012d2/manager/1.log" Jan 27 19:37:48 crc kubenswrapper[4907]: I0127 19:37:48.475577 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-tn4d6_a733096f-e99d-4186-8542-1d8cb16012d2/manager/0.log" Jan 27 19:37:48 crc kubenswrapper[4907]: I0127 19:37:48.539936 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9_8a6e2a40-e233-4dbe-9b63-0fecf3fc1487/manager/1.log" Jan 27 19:37:48 crc kubenswrapper[4907]: I0127 19:37:48.545148 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854rpqq9_8a6e2a40-e233-4dbe-9b63-0fecf3fc1487/manager/0.log" Jan 27 19:37:49 crc kubenswrapper[4907]: I0127 19:37:49.148412 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7c754559d6-wt8dc_f22de95d-f437-432c-917a-a08c082e02c4/operator/0.log" Jan 27 19:37:49 crc kubenswrapper[4907]: I0127 19:37:49.219424 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7c754559d6-wt8dc_f22de95d-f437-432c-917a-a08c082e02c4/operator/1.log" Jan 27 19:37:49 crc kubenswrapper[4907]: I0127 19:37:49.540862 4907 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-xc2fp_0a849662-db42-42f0-9317-eb3714b775d0/registry-server/0.log" Jan 27 19:37:49 crc kubenswrapper[4907]: I0127 19:37:49.696443 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-bf27l_a8e8fa01-e75c-41bc-bfbb-affea0fcc0a2/manager/0.log" Jan 27 19:37:49 crc kubenswrapper[4907]: I0127 19:37:49.825057 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-mpgzf_f84f4e53-c1de-49a3-8435-5e4999a034fd/manager/0.log" Jan 27 19:37:50 crc kubenswrapper[4907]: I0127 19:37:50.015791 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-gfl97_a4aa00b3-8a54-4f84-907d-34a73b93944f/operator/1.log" Jan 27 19:37:50 crc kubenswrapper[4907]: I0127 19:37:50.106256 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-gfl97_a4aa00b3-8a54-4f84-907d-34a73b93944f/operator/0.log" Jan 27 19:37:50 crc kubenswrapper[4907]: I0127 19:37:50.305582 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-fljbt_24caa967-ac26-4666-bf41-e2c4bc6ebb0f/manager/0.log" Jan 27 19:37:50 crc kubenswrapper[4907]: I0127 19:37:50.551997 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-ph8fw_7f5a8eee-f06b-4376-90d6-ff3faef0e8af/manager/1.log" Jan 27 19:37:50 crc kubenswrapper[4907]: I0127 19:37:50.644540 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-ph8fw_7f5a8eee-f06b-4376-90d6-ff3faef0e8af/manager/0.log" Jan 27 19:37:50 crc kubenswrapper[4907]: I0127 19:37:50.683552 4907 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6f954ddc5b-fjchc_7707f450-bf8d-4e84-9baa-a02bc80a0b22/manager/0.log" Jan 27 19:37:50 crc kubenswrapper[4907]: I0127 19:37:50.755012 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7567458d64-vvlhm_12b8e76f-853f-4eeb-b6c5-e77d05bec357/manager/0.log" Jan 27 19:37:50 crc kubenswrapper[4907]: I0127 19:37:50.861578 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-wvnrt_ba33cbc9-9a56-4c45-8c07-19b4110e03c3/manager/0.log" Jan 27 19:37:51 crc kubenswrapper[4907]: I0127 19:37:51.265980 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:51 crc kubenswrapper[4907]: I0127 19:37:51.266025 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:37:51 crc kubenswrapper[4907]: I0127 19:37:51.325472 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:38:01 crc kubenswrapper[4907]: I0127 19:38:01.328478 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:38:01 crc kubenswrapper[4907]: I0127 19:38:01.387308 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wl5q"] Jan 27 19:38:01 crc kubenswrapper[4907]: I0127 19:38:01.643831 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5wl5q" podUID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" containerName="registry-server" containerID="cri-o://2cf8c64730830719caf33eccab19ea6a740f56201d48faedd2b0964a99b14a4a" gracePeriod=2 Jan 27 
19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.659785 4907 generic.go:334] "Generic (PLEG): container finished" podID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" containerID="2cf8c64730830719caf33eccab19ea6a740f56201d48faedd2b0964a99b14a4a" exitCode=0 Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.659873 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wl5q" event={"ID":"c39e116a-bdc4-4a6a-94dd-6e1814c9532d","Type":"ContainerDied","Data":"2cf8c64730830719caf33eccab19ea6a740f56201d48faedd2b0964a99b14a4a"} Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.660393 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wl5q" event={"ID":"c39e116a-bdc4-4a6a-94dd-6e1814c9532d","Type":"ContainerDied","Data":"0efc126b62a887aba832cb2392ed15dc7df4335e49f9a652e9b5ff1291105dbc"} Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.660410 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0efc126b62a887aba832cb2392ed15dc7df4335e49f9a652e9b5ff1291105dbc" Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.665764 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.767895 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thppr\" (UniqueName: \"kubernetes.io/projected/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-kube-api-access-thppr\") pod \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.768222 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-utilities\") pod \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.768275 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-catalog-content\") pod \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\" (UID: \"c39e116a-bdc4-4a6a-94dd-6e1814c9532d\") " Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.773880 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-utilities" (OuterVolumeSpecName: "utilities") pod "c39e116a-bdc4-4a6a-94dd-6e1814c9532d" (UID: "c39e116a-bdc4-4a6a-94dd-6e1814c9532d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.788843 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-kube-api-access-thppr" (OuterVolumeSpecName: "kube-api-access-thppr") pod "c39e116a-bdc4-4a6a-94dd-6e1814c9532d" (UID: "c39e116a-bdc4-4a6a-94dd-6e1814c9532d"). InnerVolumeSpecName "kube-api-access-thppr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.815162 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c39e116a-bdc4-4a6a-94dd-6e1814c9532d" (UID: "c39e116a-bdc4-4a6a-94dd-6e1814c9532d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.872780 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thppr\" (UniqueName: \"kubernetes.io/projected/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-kube-api-access-thppr\") on node \"crc\" DevicePath \"\"" Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.873070 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:38:02 crc kubenswrapper[4907]: I0127 19:38:02.873082 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c39e116a-bdc4-4a6a-94dd-6e1814c9532d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:38:03 crc kubenswrapper[4907]: I0127 19:38:03.671612 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wl5q" Jan 27 19:38:03 crc kubenswrapper[4907]: I0127 19:38:03.716048 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wl5q"] Jan 27 19:38:03 crc kubenswrapper[4907]: I0127 19:38:03.734099 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wl5q"] Jan 27 19:38:03 crc kubenswrapper[4907]: I0127 19:38:03.773476 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" path="/var/lib/kubelet/pods/c39e116a-bdc4-4a6a-94dd-6e1814c9532d/volumes" Jan 27 19:38:14 crc kubenswrapper[4907]: I0127 19:38:14.123766 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-7v8cj_42d77196-c327-47c3-8713-d23038a08e13/control-plane-machine-set-operator/0.log" Jan 27 19:38:15 crc kubenswrapper[4907]: I0127 19:38:15.120364 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-znwrp_b3e7e0e7-2f37-4998-af7c-6e5d373a1264/machine-api-operator/0.log" Jan 27 19:38:15 crc kubenswrapper[4907]: I0127 19:38:15.141456 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-znwrp_b3e7e0e7-2f37-4998-af7c-6e5d373a1264/kube-rbac-proxy/0.log" Jan 27 19:38:26 crc kubenswrapper[4907]: I0127 19:38:26.520946 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:38:26 crc kubenswrapper[4907]: I0127 19:38:26.521653 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" 
podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:38:28 crc kubenswrapper[4907]: I0127 19:38:28.604643 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-jslkq_1fa35228-e301-48b5-b17b-21694e61ef16/cert-manager-controller/0.log" Jan 27 19:38:29 crc kubenswrapper[4907]: I0127 19:38:29.619036 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-58hmb_19be711f-36d9-46ae-8f7a-fdba490484da/cert-manager-cainjector/0.log" Jan 27 19:38:29 crc kubenswrapper[4907]: I0127 19:38:29.643367 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-jfhbt_53565dd2-5a29-4ba0-9654-36b9600f765b/cert-manager-webhook/0.log" Jan 27 19:38:42 crc kubenswrapper[4907]: I0127 19:38:42.586096 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-rhr2w_d3336bb0-ef0d-47f3-b3c7-de266154f20e/nmstate-console-plugin/0.log" Jan 27 19:38:42 crc kubenswrapper[4907]: I0127 19:38:42.773179 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-wz5df_0b5adf10-ea9c-48b5-bece-3ee8683423e3/nmstate-handler/0.log" Jan 27 19:38:42 crc kubenswrapper[4907]: I0127 19:38:42.825293 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-f7vbh_eeb93cd2-3631-4fad-a0d1-01232bbf9202/kube-rbac-proxy/0.log" Jan 27 19:38:42 crc kubenswrapper[4907]: I0127 19:38:42.888294 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-f7vbh_eeb93cd2-3631-4fad-a0d1-01232bbf9202/nmstate-metrics/0.log" Jan 27 19:38:42 crc kubenswrapper[4907]: I0127 19:38:42.995197 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-j277h_a9bfdf1d-7169-4990-bc4b-0a4b96f5ff0b/nmstate-operator/0.log" Jan 27 19:38:43 crc kubenswrapper[4907]: I0127 19:38:43.147259 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-5q5h2_c53f2859-15de-4c57-81ba-539c7787b649/nmstate-webhook/0.log" Jan 27 19:38:56 crc kubenswrapper[4907]: I0127 19:38:56.424508 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7b8dfd4994-zw4xr_6347c63b-e1fb-4570-a350-68a9f9f1b79b/kube-rbac-proxy/0.log" Jan 27 19:38:56 crc kubenswrapper[4907]: I0127 19:38:56.432712 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7b8dfd4994-zw4xr_6347c63b-e1fb-4570-a350-68a9f9f1b79b/manager/1.log" Jan 27 19:38:56 crc kubenswrapper[4907]: I0127 19:38:56.521528 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:38:56 crc kubenswrapper[4907]: I0127 19:38:56.521602 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:38:56 crc kubenswrapper[4907]: I0127 19:38:56.637834 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7b8dfd4994-zw4xr_6347c63b-e1fb-4570-a350-68a9f9f1b79b/manager/0.log" Jan 27 19:39:12 crc kubenswrapper[4907]: I0127 19:39:12.927908 4907 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s_91eb4541-31f7-488a-ae31-d57bfa265442/prometheus-operator-admission-webhook/0.log" Jan 27 19:39:12 crc kubenswrapper[4907]: I0127 19:39:12.977410 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-k7sff_d68ab367-2841-460c-b666-5b52ec455dd2/prometheus-operator/0.log" Jan 27 19:39:13 crc kubenswrapper[4907]: I0127 19:39:13.186285 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh_c1a068f6-1c40-4947-b9bd-3b018ddcb25b/prometheus-operator-admission-webhook/0.log" Jan 27 19:39:13 crc kubenswrapper[4907]: I0127 19:39:13.253907 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7x4fp_812bcca3-8896-4492-86ff-1df596f0e604/operator/1.log" Jan 27 19:39:13 crc kubenswrapper[4907]: I0127 19:39:13.462284 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7x4fp_812bcca3-8896-4492-86ff-1df596f0e604/operator/0.log" Jan 27 19:39:13 crc kubenswrapper[4907]: I0127 19:39:13.504819 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-s824m_6ccb4875-977f-4fea-b3fa-8a4e4ba5a874/observability-ui-dashboards/0.log" Jan 27 19:39:13 crc kubenswrapper[4907]: I0127 19:39:13.683774 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-65v8r_99183c02-34c0-4a91-9e6e-0efd5d2a7a42/perses-operator/0.log" Jan 27 19:39:26 crc kubenswrapper[4907]: I0127 19:39:26.521253 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:39:26 crc kubenswrapper[4907]: I0127 19:39:26.521889 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:39:26 crc kubenswrapper[4907]: I0127 19:39:26.521956 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 19:39:26 crc kubenswrapper[4907]: I0127 19:39:26.523267 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"54094998ff2bbae779505a5ecd55f38a49e0c980c5658af5dfa09c0890c1088f"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:39:26 crc kubenswrapper[4907]: I0127 19:39:26.523362 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://54094998ff2bbae779505a5ecd55f38a49e0c980c5658af5dfa09c0890c1088f" gracePeriod=600 Jan 27 19:39:27 crc kubenswrapper[4907]: I0127 19:39:27.611958 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="54094998ff2bbae779505a5ecd55f38a49e0c980c5658af5dfa09c0890c1088f" exitCode=0 Jan 27 19:39:27 crc kubenswrapper[4907]: I0127 19:39:27.612473 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" 
event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"54094998ff2bbae779505a5ecd55f38a49e0c980c5658af5dfa09c0890c1088f"} Jan 27 19:39:27 crc kubenswrapper[4907]: I0127 19:39:27.612500 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerStarted","Data":"f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"} Jan 27 19:39:27 crc kubenswrapper[4907]: I0127 19:39:27.612517 4907 scope.go:117] "RemoveContainer" containerID="69d37b0f5534e49b9fdff8be2311d45a09b070bffc58694054af389798e2032b" Jan 27 19:39:31 crc kubenswrapper[4907]: I0127 19:39:31.239303 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-79cf69ddc8-t7bh6_1f119aff-6ff6-4393-b7d5-19a981e50f3c/cluster-logging-operator/0.log" Jan 27 19:39:31 crc kubenswrapper[4907]: I0127 19:39:31.457468 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-2bmhz_e66fb20d-fb54-4964-9fb8-0ca14b94f895/collector/0.log" Jan 27 19:39:31 crc kubenswrapper[4907]: I0127 19:39:31.500796 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_2448dad5-d0f7-4335-a3fb-a23c5ef59bbf/loki-compactor/0.log" Jan 27 19:39:31 crc kubenswrapper[4907]: I0127 19:39:31.670851 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5f678c8dd6-zhq64_bd9b9d3c-ee96-4eb0-9b0a-5cfdc2241542/loki-distributor/0.log" Jan 27 19:39:31 crc kubenswrapper[4907]: I0127 19:39:31.724110 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-795ff9d55b-mwm5k_d57b015c-f3fc-424d-b910-96e63c6da31a/gateway/0.log" Jan 27 19:39:31 crc kubenswrapper[4907]: I0127 19:39:31.834982 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-gateway-795ff9d55b-mwm5k_d57b015c-f3fc-424d-b910-96e63c6da31a/opa/0.log" Jan 27 19:39:31 crc kubenswrapper[4907]: I0127 19:39:31.948755 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-795ff9d55b-njxl9_faf9da31-9bbb-43b4-9cc1-a80f95392ccf/opa/0.log" Jan 27 19:39:31 crc kubenswrapper[4907]: I0127 19:39:31.961807 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-795ff9d55b-njxl9_faf9da31-9bbb-43b4-9cc1-a80f95392ccf/gateway/0.log" Jan 27 19:39:32 crc kubenswrapper[4907]: I0127 19:39:32.151526 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_a9dc6389-0ad3-4259-aaf2-945493e66aa2/loki-index-gateway/0.log" Jan 27 19:39:32 crc kubenswrapper[4907]: I0127 19:39:32.194449 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_30b4b16e-4eff-46be-aac5-63d2b3d8fdf2/loki-ingester/0.log" Jan 27 19:39:32 crc kubenswrapper[4907]: I0127 19:39:32.383199 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76788598db-r2fdr_8f62d8a1-62d1-4206-b061-f75c44ff2450/loki-querier/0.log" Jan 27 19:39:32 crc kubenswrapper[4907]: I0127 19:39:32.444585 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-69d9546745-4ngf2_70874c1f-da0d-4389-8021-fd3003150fff/loki-query-frontend/0.log" Jan 27 19:39:48 crc kubenswrapper[4907]: I0127 19:39:48.098503 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-zfszb_2ea123ce-4328-4379-8310-dbfff15acfbf/kube-rbac-proxy/0.log" Jan 27 19:39:48 crc kubenswrapper[4907]: I0127 19:39:48.253272 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-zfszb_2ea123ce-4328-4379-8310-dbfff15acfbf/controller/0.log" Jan 27 19:39:49 crc kubenswrapper[4907]: I0127 19:39:49.200425 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-frr-files/0.log" Jan 27 19:39:49 crc kubenswrapper[4907]: I0127 19:39:49.468573 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-frr-files/0.log" Jan 27 19:39:49 crc kubenswrapper[4907]: I0127 19:39:49.484707 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-reloader/0.log" Jan 27 19:39:49 crc kubenswrapper[4907]: I0127 19:39:49.485589 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-metrics/0.log" Jan 27 19:39:49 crc kubenswrapper[4907]: I0127 19:39:49.530364 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-reloader/0.log" Jan 27 19:39:49 crc kubenswrapper[4907]: I0127 19:39:49.734758 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-metrics/0.log" Jan 27 19:39:49 crc kubenswrapper[4907]: I0127 19:39:49.741579 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-metrics/0.log" Jan 27 19:39:49 crc kubenswrapper[4907]: I0127 19:39:49.784084 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-frr-files/0.log" Jan 27 19:39:49 crc kubenswrapper[4907]: I0127 19:39:49.837756 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-reloader/0.log" Jan 27 19:39:50 crc kubenswrapper[4907]: I0127 19:39:50.001936 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-reloader/0.log" Jan 27 19:39:50 crc kubenswrapper[4907]: I0127 19:39:50.010917 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-frr-files/0.log" Jan 27 19:39:50 crc kubenswrapper[4907]: I0127 19:39:50.074402 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/cp-metrics/0.log" Jan 27 19:39:50 crc kubenswrapper[4907]: I0127 19:39:50.089851 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/controller/0.log" Jan 27 19:39:50 crc kubenswrapper[4907]: I0127 19:39:50.253874 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/frr-metrics/0.log" Jan 27 19:39:50 crc kubenswrapper[4907]: I0127 19:39:50.306297 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/kube-rbac-proxy/0.log" Jan 27 19:39:50 crc kubenswrapper[4907]: I0127 19:39:50.357717 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/frr/1.log" Jan 27 19:39:50 crc kubenswrapper[4907]: I0127 19:39:50.564659 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/kube-rbac-proxy-frr/0.log" Jan 27 19:39:50 crc kubenswrapper[4907]: I0127 19:39:50.606314 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/reloader/0.log" Jan 27 19:39:50 crc kubenswrapper[4907]: I0127 19:39:50.796800 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-n9qqt_dd967d05-2ecd-4578-9c41-22e36ff088c1/frr-k8s-webhook-server/0.log" Jan 27 19:39:50 crc kubenswrapper[4907]: I0127 19:39:50.828731 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6858498495-rcqbh_9a776a10-0883-468e-a8d3-087ca6429b1b/manager/1.log" Jan 27 19:39:51 crc kubenswrapper[4907]: I0127 19:39:51.103896 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6858498495-rcqbh_9a776a10-0883-468e-a8d3-087ca6429b1b/manager/0.log" Jan 27 19:39:51 crc kubenswrapper[4907]: I0127 19:39:51.137473 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-548b7f8fd-7zpsk_202ff14a-3733-4ccf-8202-94fac75bdfc4/webhook-server/0.log" Jan 27 19:39:51 crc kubenswrapper[4907]: I0127 19:39:51.332708 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-597cv_aa958bdc-32c5-4e9f-841e-7427fdb87b31/kube-rbac-proxy/0.log" Jan 27 19:39:52 crc kubenswrapper[4907]: I0127 19:39:52.056583 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-597cv_aa958bdc-32c5-4e9f-841e-7427fdb87b31/speaker/0.log" Jan 27 19:39:52 crc kubenswrapper[4907]: I0127 19:39:52.142773 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-csdnr_3a1b45eb-7bdd-4172-99f0-b74eabce028d/frr/0.log" Jan 27 19:40:07 crc kubenswrapper[4907]: I0127 19:40:07.050943 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn_7584cc55-f71d-485d-aca5-31f66746f17a/util/0.log" Jan 27 19:40:07 
crc kubenswrapper[4907]: I0127 19:40:07.376453 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn_7584cc55-f71d-485d-aca5-31f66746f17a/util/0.log" Jan 27 19:40:07 crc kubenswrapper[4907]: I0127 19:40:07.394450 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn_7584cc55-f71d-485d-aca5-31f66746f17a/pull/0.log" Jan 27 19:40:07 crc kubenswrapper[4907]: I0127 19:40:07.398777 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn_7584cc55-f71d-485d-aca5-31f66746f17a/pull/0.log" Jan 27 19:40:07 crc kubenswrapper[4907]: I0127 19:40:07.615846 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn_7584cc55-f71d-485d-aca5-31f66746f17a/util/0.log" Jan 27 19:40:07 crc kubenswrapper[4907]: I0127 19:40:07.665041 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn_7584cc55-f71d-485d-aca5-31f66746f17a/extract/0.log" Jan 27 19:40:07 crc kubenswrapper[4907]: I0127 19:40:07.669771 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2mdsjn_7584cc55-f71d-485d-aca5-31f66746f17a/pull/0.log" Jan 27 19:40:07 crc kubenswrapper[4907]: I0127 19:40:07.837229 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj_bc3f86b6-0741-4ef9-9244-fc9378289ec2/util/0.log" Jan 27 19:40:08 crc kubenswrapper[4907]: I0127 19:40:08.642667 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj_bc3f86b6-0741-4ef9-9244-fc9378289ec2/pull/0.log" Jan 27 19:40:08 crc kubenswrapper[4907]: I0127 19:40:08.663300 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj_bc3f86b6-0741-4ef9-9244-fc9378289ec2/pull/0.log" Jan 27 19:40:08 crc kubenswrapper[4907]: I0127 19:40:08.675998 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj_bc3f86b6-0741-4ef9-9244-fc9378289ec2/util/0.log" Jan 27 19:40:08 crc kubenswrapper[4907]: I0127 19:40:08.917869 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj_bc3f86b6-0741-4ef9-9244-fc9378289ec2/pull/0.log" Jan 27 19:40:08 crc kubenswrapper[4907]: I0127 19:40:08.954653 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj_bc3f86b6-0741-4ef9-9244-fc9378289ec2/util/0.log" Jan 27 19:40:08 crc kubenswrapper[4907]: I0127 19:40:08.977311 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchrnmj_bc3f86b6-0741-4ef9-9244-fc9378289ec2/extract/0.log" Jan 27 19:40:09 crc kubenswrapper[4907]: I0127 19:40:09.164113 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt_3d9834e6-1e3d-42b3-90bf-204c9fa7bb68/util/0.log" Jan 27 19:40:09 crc kubenswrapper[4907]: I0127 19:40:09.407515 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt_3d9834e6-1e3d-42b3-90bf-204c9fa7bb68/util/0.log" Jan 27 
19:40:09 crc kubenswrapper[4907]: I0127 19:40:09.442364 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt_3d9834e6-1e3d-42b3-90bf-204c9fa7bb68/pull/0.log" Jan 27 19:40:09 crc kubenswrapper[4907]: I0127 19:40:09.465448 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt_3d9834e6-1e3d-42b3-90bf-204c9fa7bb68/pull/0.log" Jan 27 19:40:09 crc kubenswrapper[4907]: I0127 19:40:09.659652 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt_3d9834e6-1e3d-42b3-90bf-204c9fa7bb68/util/0.log" Jan 27 19:40:09 crc kubenswrapper[4907]: I0127 19:40:09.690824 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt_3d9834e6-1e3d-42b3-90bf-204c9fa7bb68/extract/0.log" Jan 27 19:40:09 crc kubenswrapper[4907]: I0127 19:40:09.725022 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bhcnpt_3d9834e6-1e3d-42b3-90bf-204c9fa7bb68/pull/0.log" Jan 27 19:40:09 crc kubenswrapper[4907]: I0127 19:40:09.904506 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq_9d2f9525-f0c4-4585-8162-0bce8fb139e9/util/0.log" Jan 27 19:40:10 crc kubenswrapper[4907]: I0127 19:40:10.089491 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq_9d2f9525-f0c4-4585-8162-0bce8fb139e9/pull/0.log" Jan 27 19:40:10 crc kubenswrapper[4907]: I0127 19:40:10.101928 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq_9d2f9525-f0c4-4585-8162-0bce8fb139e9/util/0.log" Jan 27 19:40:10 crc kubenswrapper[4907]: I0127 19:40:10.131282 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq_9d2f9525-f0c4-4585-8162-0bce8fb139e9/pull/0.log" Jan 27 19:40:10 crc kubenswrapper[4907]: I0127 19:40:10.609771 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq_9d2f9525-f0c4-4585-8162-0bce8fb139e9/util/0.log" Jan 27 19:40:10 crc kubenswrapper[4907]: I0127 19:40:10.668951 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq_9d2f9525-f0c4-4585-8162-0bce8fb139e9/extract/0.log" Jan 27 19:40:10 crc kubenswrapper[4907]: I0127 19:40:10.670742 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vtpcq_9d2f9525-f0c4-4585-8162-0bce8fb139e9/pull/0.log" Jan 27 19:40:10 crc kubenswrapper[4907]: I0127 19:40:10.802901 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw_23fc61bd-6b09-47f7-b16a-b71c959bef3d/util/0.log" Jan 27 19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.043893 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw_23fc61bd-6b09-47f7-b16a-b71c959bef3d/util/0.log" Jan 27 19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.077397 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw_23fc61bd-6b09-47f7-b16a-b71c959bef3d/pull/0.log" Jan 27 
19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.077795 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw_23fc61bd-6b09-47f7-b16a-b71c959bef3d/pull/0.log" Jan 27 19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.287149 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw_23fc61bd-6b09-47f7-b16a-b71c959bef3d/extract/0.log" Jan 27 19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.307756 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw_23fc61bd-6b09-47f7-b16a-b71c959bef3d/util/0.log" Jan 27 19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.334701 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086tppw_23fc61bd-6b09-47f7-b16a-b71c959bef3d/pull/0.log" Jan 27 19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.409032 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vrcdt_8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee/extract-utilities/0.log" Jan 27 19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.587235 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vrcdt_8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee/extract-content/0.log" Jan 27 19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.589717 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vrcdt_8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee/extract-utilities/0.log" Jan 27 19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.600617 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vrcdt_8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee/extract-content/0.log" Jan 27 
19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.797887 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vrcdt_8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee/extract-content/0.log" Jan 27 19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.810343 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vrcdt_8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee/extract-utilities/0.log" Jan 27 19:40:11 crc kubenswrapper[4907]: I0127 19:40:11.845268 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dhv2c_bae6221e-526b-4cc4-9f9b-1079238c9100/extract-utilities/0.log" Jan 27 19:40:12 crc kubenswrapper[4907]: I0127 19:40:12.165221 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dhv2c_bae6221e-526b-4cc4-9f9b-1079238c9100/extract-content/0.log" Jan 27 19:40:12 crc kubenswrapper[4907]: I0127 19:40:12.194601 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dhv2c_bae6221e-526b-4cc4-9f9b-1079238c9100/extract-utilities/0.log" Jan 27 19:40:12 crc kubenswrapper[4907]: I0127 19:40:12.226572 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dhv2c_bae6221e-526b-4cc4-9f9b-1079238c9100/extract-content/0.log" Jan 27 19:40:12 crc kubenswrapper[4907]: I0127 19:40:12.614745 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dhv2c_bae6221e-526b-4cc4-9f9b-1079238c9100/extract-utilities/0.log" Jan 27 19:40:12 crc kubenswrapper[4907]: I0127 19:40:12.641670 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dhv2c_bae6221e-526b-4cc4-9f9b-1079238c9100/extract-content/0.log" Jan 27 19:40:12 crc kubenswrapper[4907]: I0127 19:40:12.664188 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-vrcdt_8bc8a6bd-6efd-4f2d-89f5-0ceb2441efee/registry-server/0.log" Jan 27 19:40:12 crc kubenswrapper[4907]: I0127 19:40:12.840772 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-87z2b_5564598e-ff23-4f9e-b3de-64e127e94da6/marketplace-operator/0.log" Jan 27 19:40:12 crc kubenswrapper[4907]: I0127 19:40:12.873911 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wz7rn_1ec7dee3-a9ee-4bb8-b444-899c120854a7/extract-utilities/0.log" Jan 27 19:40:13 crc kubenswrapper[4907]: I0127 19:40:13.164743 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wz7rn_1ec7dee3-a9ee-4bb8-b444-899c120854a7/extract-utilities/0.log" Jan 27 19:40:13 crc kubenswrapper[4907]: I0127 19:40:13.198000 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wz7rn_1ec7dee3-a9ee-4bb8-b444-899c120854a7/extract-content/0.log" Jan 27 19:40:13 crc kubenswrapper[4907]: I0127 19:40:13.208308 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wz7rn_1ec7dee3-a9ee-4bb8-b444-899c120854a7/extract-content/0.log" Jan 27 19:40:13 crc kubenswrapper[4907]: I0127 19:40:13.487214 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wz7rn_1ec7dee3-a9ee-4bb8-b444-899c120854a7/extract-utilities/0.log" Jan 27 19:40:13 crc kubenswrapper[4907]: I0127 19:40:13.487360 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wz7rn_1ec7dee3-a9ee-4bb8-b444-899c120854a7/extract-content/0.log" Jan 27 19:40:13 crc kubenswrapper[4907]: I0127 19:40:13.705966 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-dv4j2_fdf800ed-f5e8-4478-9e7a-98c7c95c7c52/extract-utilities/0.log" Jan 27 19:40:13 crc kubenswrapper[4907]: I0127 19:40:13.874913 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wz7rn_1ec7dee3-a9ee-4bb8-b444-899c120854a7/registry-server/0.log" Jan 27 19:40:13 crc kubenswrapper[4907]: I0127 19:40:13.926167 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dhv2c_bae6221e-526b-4cc4-9f9b-1079238c9100/registry-server/0.log" Jan 27 19:40:13 crc kubenswrapper[4907]: I0127 19:40:13.960732 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dv4j2_fdf800ed-f5e8-4478-9e7a-98c7c95c7c52/extract-utilities/0.log" Jan 27 19:40:14 crc kubenswrapper[4907]: I0127 19:40:14.009185 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dv4j2_fdf800ed-f5e8-4478-9e7a-98c7c95c7c52/extract-content/0.log" Jan 27 19:40:14 crc kubenswrapper[4907]: I0127 19:40:14.024544 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dv4j2_fdf800ed-f5e8-4478-9e7a-98c7c95c7c52/extract-content/0.log" Jan 27 19:40:14 crc kubenswrapper[4907]: I0127 19:40:14.195045 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dv4j2_fdf800ed-f5e8-4478-9e7a-98c7c95c7c52/extract-utilities/0.log" Jan 27 19:40:14 crc kubenswrapper[4907]: I0127 19:40:14.231430 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dv4j2_fdf800ed-f5e8-4478-9e7a-98c7c95c7c52/extract-content/0.log" Jan 27 19:40:15 crc kubenswrapper[4907]: I0127 19:40:15.178965 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dv4j2_fdf800ed-f5e8-4478-9e7a-98c7c95c7c52/registry-server/0.log" Jan 27 
19:40:28 crc kubenswrapper[4907]: I0127 19:40:28.732807 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77b86cccc9-78j5s_91eb4541-31f7-488a-ae31-d57bfa265442/prometheus-operator-admission-webhook/0.log" Jan 27 19:40:28 crc kubenswrapper[4907]: I0127 19:40:28.786962 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77b86cccc9-dptfh_c1a068f6-1c40-4947-b9bd-3b018ddcb25b/prometheus-operator-admission-webhook/0.log" Jan 27 19:40:28 crc kubenswrapper[4907]: I0127 19:40:28.798954 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-k7sff_d68ab367-2841-460c-b666-5b52ec455dd2/prometheus-operator/0.log" Jan 27 19:40:28 crc kubenswrapper[4907]: I0127 19:40:28.973685 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7x4fp_812bcca3-8896-4492-86ff-1df596f0e604/operator/1.log" Jan 27 19:40:28 crc kubenswrapper[4907]: I0127 19:40:28.980904 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-s824m_6ccb4875-977f-4fea-b3fa-8a4e4ba5a874/observability-ui-dashboards/0.log" Jan 27 19:40:29 crc kubenswrapper[4907]: I0127 19:40:29.022235 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7x4fp_812bcca3-8896-4492-86ff-1df596f0e604/operator/0.log" Jan 27 19:40:29 crc kubenswrapper[4907]: I0127 19:40:29.060317 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-65v8r_99183c02-34c0-4a91-9e6e-0efd5d2a7a42/perses-operator/0.log" Jan 27 19:40:44 crc kubenswrapper[4907]: I0127 19:40:44.555229 4907 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7b8dfd4994-zw4xr_6347c63b-e1fb-4570-a350-68a9f9f1b79b/kube-rbac-proxy/0.log" Jan 27 19:40:44 crc kubenswrapper[4907]: I0127 19:40:44.682652 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7b8dfd4994-zw4xr_6347c63b-e1fb-4570-a350-68a9f9f1b79b/manager/1.log" Jan 27 19:40:44 crc kubenswrapper[4907]: I0127 19:40:44.713127 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7b8dfd4994-zw4xr_6347c63b-e1fb-4570-a350-68a9f9f1b79b/manager/0.log" Jan 27 19:41:11 crc kubenswrapper[4907]: E0127 19:41:11.256341 4907 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.184:40824->38.102.83.184:45697: write tcp 38.102.83.184:40824->38.102.83.184:45697: write: broken pipe Jan 27 19:41:26 crc kubenswrapper[4907]: I0127 19:41:26.521113 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:41:26 crc kubenswrapper[4907]: I0127 19:41:26.521663 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:41:56 crc kubenswrapper[4907]: I0127 19:41:56.521347 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 27 19:41:56 crc kubenswrapper[4907]: I0127 19:41:56.522021 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:42:03 crc kubenswrapper[4907]: I0127 19:42:03.546975 4907 scope.go:117] "RemoveContainer" containerID="4bb6fd9aa02800220ee37faf75938e54ec1292809dbcefc01dfb51fb4fde5f23" Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.664091 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tlkk7"] Jan 27 19:42:05 crc kubenswrapper[4907]: E0127 19:42:05.665594 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" containerName="extract-content" Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.665615 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" containerName="extract-content" Jan 27 19:42:05 crc kubenswrapper[4907]: E0127 19:42:05.665655 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" containerName="registry-server" Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.665663 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" containerName="registry-server" Jan 27 19:42:05 crc kubenswrapper[4907]: E0127 19:42:05.665677 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" containerName="extract-utilities" Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.665686 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" containerName="extract-utilities" Jan 27 19:42:05 crc 
kubenswrapper[4907]: I0127 19:42:05.666004 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39e116a-bdc4-4a6a-94dd-6e1814c9532d" containerName="registry-server" Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.668278 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tlkk7" Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.695786 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tlkk7"] Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.835538 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-catalog-content\") pod \"certified-operators-tlkk7\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") " pod="openshift-marketplace/certified-operators-tlkk7" Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.835674 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwgbw\" (UniqueName: \"kubernetes.io/projected/4b177ad6-c99a-4f62-8fd6-c223bc910e39-kube-api-access-nwgbw\") pod \"certified-operators-tlkk7\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") " pod="openshift-marketplace/certified-operators-tlkk7" Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.835879 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-utilities\") pod \"certified-operators-tlkk7\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") " pod="openshift-marketplace/certified-operators-tlkk7" Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.941413 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-utilities\") pod \"certified-operators-tlkk7\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") " pod="openshift-marketplace/certified-operators-tlkk7" Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.941943 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-catalog-content\") pod \"certified-operators-tlkk7\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") " pod="openshift-marketplace/certified-operators-tlkk7" Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.942028 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwgbw\" (UniqueName: \"kubernetes.io/projected/4b177ad6-c99a-4f62-8fd6-c223bc910e39-kube-api-access-nwgbw\") pod \"certified-operators-tlkk7\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") " pod="openshift-marketplace/certified-operators-tlkk7" Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.943833 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-utilities\") pod \"certified-operators-tlkk7\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") " pod="openshift-marketplace/certified-operators-tlkk7" Jan 27 19:42:05 crc kubenswrapper[4907]: I0127 19:42:05.944090 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-catalog-content\") pod \"certified-operators-tlkk7\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") " pod="openshift-marketplace/certified-operators-tlkk7" Jan 27 19:42:06 crc kubenswrapper[4907]: I0127 19:42:06.055547 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwgbw\" (UniqueName: 
\"kubernetes.io/projected/4b177ad6-c99a-4f62-8fd6-c223bc910e39-kube-api-access-nwgbw\") pod \"certified-operators-tlkk7\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") " pod="openshift-marketplace/certified-operators-tlkk7" Jan 27 19:42:06 crc kubenswrapper[4907]: I0127 19:42:06.295686 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tlkk7" Jan 27 19:42:07 crc kubenswrapper[4907]: I0127 19:42:07.316699 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tlkk7"] Jan 27 19:42:07 crc kubenswrapper[4907]: I0127 19:42:07.434079 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlkk7" event={"ID":"4b177ad6-c99a-4f62-8fd6-c223bc910e39","Type":"ContainerStarted","Data":"b1a93475143b5e6aaa1f41b20b5f1ca3e9c28fb27e98f77735e4e68a0d1e4648"} Jan 27 19:42:08 crc kubenswrapper[4907]: I0127 19:42:08.449875 4907 generic.go:334] "Generic (PLEG): container finished" podID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" containerID="3e78715a04dd5d0ce8f82e1025e55d0106adc72fc6bad70e8b459d8116c04d6c" exitCode=0 Jan 27 19:42:08 crc kubenswrapper[4907]: I0127 19:42:08.449928 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlkk7" event={"ID":"4b177ad6-c99a-4f62-8fd6-c223bc910e39","Type":"ContainerDied","Data":"3e78715a04dd5d0ce8f82e1025e55d0106adc72fc6bad70e8b459d8116c04d6c"} Jan 27 19:42:08 crc kubenswrapper[4907]: E0127 19:42:08.539696 4907 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b177ad6_c99a_4f62_8fd6_c223bc910e39.slice/crio-3e78715a04dd5d0ce8f82e1025e55d0106adc72fc6bad70e8b459d8116c04d6c.scope\": RecentStats: unable to find data in memory cache]" Jan 27 19:42:09 crc kubenswrapper[4907]: I0127 19:42:09.464410 4907 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlkk7" event={"ID":"4b177ad6-c99a-4f62-8fd6-c223bc910e39","Type":"ContainerStarted","Data":"4113f1872446d9c253a1e3d0c8e0e5dc7dbcd71a0c0465ae8e72342d66d5e5c0"} Jan 27 19:42:11 crc kubenswrapper[4907]: I0127 19:42:11.487864 4907 generic.go:334] "Generic (PLEG): container finished" podID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" containerID="4113f1872446d9c253a1e3d0c8e0e5dc7dbcd71a0c0465ae8e72342d66d5e5c0" exitCode=0 Jan 27 19:42:11 crc kubenswrapper[4907]: I0127 19:42:11.487911 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlkk7" event={"ID":"4b177ad6-c99a-4f62-8fd6-c223bc910e39","Type":"ContainerDied","Data":"4113f1872446d9c253a1e3d0c8e0e5dc7dbcd71a0c0465ae8e72342d66d5e5c0"} Jan 27 19:42:12 crc kubenswrapper[4907]: I0127 19:42:12.522485 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlkk7" event={"ID":"4b177ad6-c99a-4f62-8fd6-c223bc910e39","Type":"ContainerStarted","Data":"5c00d456797de61e00f45dd4070b12b3a002b3639335c06a706aa199d7087b15"} Jan 27 19:42:12 crc kubenswrapper[4907]: I0127 19:42:12.560121 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tlkk7" podStartSLOduration=4.146293071 podStartE2EDuration="7.560096819s" podCreationTimestamp="2026-01-27 19:42:05 +0000 UTC" firstStartedPulling="2026-01-27 19:42:08.451824579 +0000 UTC m=+5783.581107201" lastFinishedPulling="2026-01-27 19:42:11.865628337 +0000 UTC m=+5786.994910949" observedRunningTime="2026-01-27 19:42:12.541502053 +0000 UTC m=+5787.670784665" watchObservedRunningTime="2026-01-27 19:42:12.560096819 +0000 UTC m=+5787.689379431" Jan 27 19:42:16 crc kubenswrapper[4907]: I0127 19:42:16.296484 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tlkk7" Jan 27 19:42:16 crc 
kubenswrapper[4907]: I0127 19:42:16.298690 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tlkk7" Jan 27 19:42:16 crc kubenswrapper[4907]: I0127 19:42:16.412145 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tlkk7" Jan 27 19:42:18 crc kubenswrapper[4907]: I0127 19:42:18.012495 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tlkk7" Jan 27 19:42:18 crc kubenswrapper[4907]: I0127 19:42:18.071542 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tlkk7"] Jan 27 19:42:19 crc kubenswrapper[4907]: I0127 19:42:19.641792 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tlkk7" podUID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" containerName="registry-server" containerID="cri-o://5c00d456797de61e00f45dd4070b12b3a002b3639335c06a706aa199d7087b15" gracePeriod=2 Jan 27 19:42:20 crc kubenswrapper[4907]: I0127 19:42:20.670675 4907 generic.go:334] "Generic (PLEG): container finished" podID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" containerID="5c00d456797de61e00f45dd4070b12b3a002b3639335c06a706aa199d7087b15" exitCode=0 Jan 27 19:42:20 crc kubenswrapper[4907]: I0127 19:42:20.671021 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlkk7" event={"ID":"4b177ad6-c99a-4f62-8fd6-c223bc910e39","Type":"ContainerDied","Data":"5c00d456797de61e00f45dd4070b12b3a002b3639335c06a706aa199d7087b15"} Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.383991 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tlkk7" Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.479833 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-catalog-content\") pod \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") " Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.480323 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-utilities\") pod \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") " Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.480374 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwgbw\" (UniqueName: \"kubernetes.io/projected/4b177ad6-c99a-4f62-8fd6-c223bc910e39-kube-api-access-nwgbw\") pod \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\" (UID: \"4b177ad6-c99a-4f62-8fd6-c223bc910e39\") " Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.481196 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-utilities" (OuterVolumeSpecName: "utilities") pod "4b177ad6-c99a-4f62-8fd6-c223bc910e39" (UID: "4b177ad6-c99a-4f62-8fd6-c223bc910e39"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.493105 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b177ad6-c99a-4f62-8fd6-c223bc910e39-kube-api-access-nwgbw" (OuterVolumeSpecName: "kube-api-access-nwgbw") pod "4b177ad6-c99a-4f62-8fd6-c223bc910e39" (UID: "4b177ad6-c99a-4f62-8fd6-c223bc910e39"). InnerVolumeSpecName "kube-api-access-nwgbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.551801 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b177ad6-c99a-4f62-8fd6-c223bc910e39" (UID: "4b177ad6-c99a-4f62-8fd6-c223bc910e39"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.584851 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.584903 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b177ad6-c99a-4f62-8fd6-c223bc910e39-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.584919 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwgbw\" (UniqueName: \"kubernetes.io/projected/4b177ad6-c99a-4f62-8fd6-c223bc910e39-kube-api-access-nwgbw\") on node \"crc\" DevicePath \"\"" Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.684941 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlkk7" event={"ID":"4b177ad6-c99a-4f62-8fd6-c223bc910e39","Type":"ContainerDied","Data":"b1a93475143b5e6aaa1f41b20b5f1ca3e9c28fb27e98f77735e4e68a0d1e4648"} Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.684994 4907 scope.go:117] "RemoveContainer" containerID="5c00d456797de61e00f45dd4070b12b3a002b3639335c06a706aa199d7087b15" Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.685045 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tlkk7" Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.715238 4907 scope.go:117] "RemoveContainer" containerID="4113f1872446d9c253a1e3d0c8e0e5dc7dbcd71a0c0465ae8e72342d66d5e5c0" Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.760060 4907 scope.go:117] "RemoveContainer" containerID="3e78715a04dd5d0ce8f82e1025e55d0106adc72fc6bad70e8b459d8116c04d6c" Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.804029 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tlkk7"] Jan 27 19:42:21 crc kubenswrapper[4907]: I0127 19:42:21.807543 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tlkk7"] Jan 27 19:42:23 crc kubenswrapper[4907]: I0127 19:42:23.771153 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" path="/var/lib/kubelet/pods/4b177ad6-c99a-4f62-8fd6-c223bc910e39/volumes" Jan 27 19:42:26 crc kubenswrapper[4907]: I0127 19:42:26.520996 4907 patch_prober.go:28] interesting pod/machine-config-daemon-wgvjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:42:26 crc kubenswrapper[4907]: I0127 19:42:26.521724 4907 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:42:26 crc kubenswrapper[4907]: I0127 19:42:26.521780 4907 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" Jan 27 
19:42:26 crc kubenswrapper[4907]: I0127 19:42:26.522877 4907 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"} pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:42:26 crc kubenswrapper[4907]: I0127 19:42:26.522940 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerName="machine-config-daemon" containerID="cri-o://f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" gracePeriod=600 Jan 27 19:42:26 crc kubenswrapper[4907]: E0127 19:42:26.641863 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:42:26 crc kubenswrapper[4907]: I0127 19:42:26.746054 4907 generic.go:334] "Generic (PLEG): container finished" podID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" exitCode=0 Jan 27 19:42:26 crc kubenswrapper[4907]: I0127 19:42:26.746086 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" event={"ID":"437f8dd5-d37d-4b51-a08f-8c68b3bc038a","Type":"ContainerDied","Data":"f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"} Jan 27 19:42:26 crc kubenswrapper[4907]: I0127 19:42:26.746121 4907 scope.go:117] 
"RemoveContainer" containerID="54094998ff2bbae779505a5ecd55f38a49e0c980c5658af5dfa09c0890c1088f" Jan 27 19:42:26 crc kubenswrapper[4907]: I0127 19:42:26.746546 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" Jan 27 19:42:26 crc kubenswrapper[4907]: E0127 19:42:26.746868 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:42:38 crc kubenswrapper[4907]: I0127 19:42:38.748578 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" Jan 27 19:42:38 crc kubenswrapper[4907]: E0127 19:42:38.749377 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:42:49 crc kubenswrapper[4907]: I0127 19:42:49.749207 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" Jan 27 19:42:49 crc kubenswrapper[4907]: E0127 19:42:49.750027 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:42:57 crc kubenswrapper[4907]: I0127 19:42:57.103069 4907 generic.go:334] "Generic (PLEG): container finished" podID="1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" containerID="7fa3c367ecb844ced4a80559c00b90adfb8a76e8f87e035467f2e575ec58c538" exitCode=0 Jan 27 19:42:57 crc kubenswrapper[4907]: I0127 19:42:57.103471 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cl48g/must-gather-s7n67" event={"ID":"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61","Type":"ContainerDied","Data":"7fa3c367ecb844ced4a80559c00b90adfb8a76e8f87e035467f2e575ec58c538"} Jan 27 19:42:57 crc kubenswrapper[4907]: I0127 19:42:57.104109 4907 scope.go:117] "RemoveContainer" containerID="7fa3c367ecb844ced4a80559c00b90adfb8a76e8f87e035467f2e575ec58c538" Jan 27 19:42:57 crc kubenswrapper[4907]: I0127 19:42:57.400505 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cl48g_must-gather-s7n67_1ae9ac3e-3958-41d3-ab5f-1da8a8535f61/gather/0.log" Jan 27 19:43:01 crc kubenswrapper[4907]: I0127 19:43:01.748405 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" Jan 27 19:43:01 crc kubenswrapper[4907]: E0127 19:43:01.749191 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:43:03 crc kubenswrapper[4907]: I0127 19:43:03.637802 4907 scope.go:117] "RemoveContainer" containerID="7ee1baf3c951ba0f96d98c51cb27f55aa01d16ba7f006a79294f4c292c0dc22c" Jan 27 19:43:05 crc kubenswrapper[4907]: I0127 
19:43:05.771368 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cl48g/must-gather-s7n67"] Jan 27 19:43:05 crc kubenswrapper[4907]: I0127 19:43:05.772400 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-cl48g/must-gather-s7n67" podUID="1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" containerName="copy" containerID="cri-o://fda1c70437b579b45625ba8bd319bb1f5ded3001420d1c91a8083242ad820aee" gracePeriod=2 Jan 27 19:43:05 crc kubenswrapper[4907]: I0127 19:43:05.785719 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cl48g/must-gather-s7n67"] Jan 27 19:43:06 crc kubenswrapper[4907]: I0127 19:43:06.215093 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cl48g_must-gather-s7n67_1ae9ac3e-3958-41d3-ab5f-1da8a8535f61/copy/0.log" Jan 27 19:43:06 crc kubenswrapper[4907]: I0127 19:43:06.215720 4907 generic.go:334] "Generic (PLEG): container finished" podID="1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" containerID="fda1c70437b579b45625ba8bd319bb1f5ded3001420d1c91a8083242ad820aee" exitCode=143 Jan 27 19:43:06 crc kubenswrapper[4907]: I0127 19:43:06.534061 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cl48g_must-gather-s7n67_1ae9ac3e-3958-41d3-ab5f-1da8a8535f61/copy/0.log" Jan 27 19:43:06 crc kubenswrapper[4907]: I0127 19:43:06.534764 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cl48g/must-gather-s7n67" Jan 27 19:43:06 crc kubenswrapper[4907]: I0127 19:43:06.644892 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phssx\" (UniqueName: \"kubernetes.io/projected/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-kube-api-access-phssx\") pod \"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61\" (UID: \"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61\") " Jan 27 19:43:06 crc kubenswrapper[4907]: I0127 19:43:06.645117 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-must-gather-output\") pod \"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61\" (UID: \"1ae9ac3e-3958-41d3-ab5f-1da8a8535f61\") " Jan 27 19:43:06 crc kubenswrapper[4907]: I0127 19:43:06.653282 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-kube-api-access-phssx" (OuterVolumeSpecName: "kube-api-access-phssx") pod "1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" (UID: "1ae9ac3e-3958-41d3-ab5f-1da8a8535f61"). InnerVolumeSpecName "kube-api-access-phssx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:43:06 crc kubenswrapper[4907]: I0127 19:43:06.755161 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phssx\" (UniqueName: \"kubernetes.io/projected/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-kube-api-access-phssx\") on node \"crc\" DevicePath \"\"" Jan 27 19:43:06 crc kubenswrapper[4907]: I0127 19:43:06.929393 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" (UID: "1ae9ac3e-3958-41d3-ab5f-1da8a8535f61"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:43:06 crc kubenswrapper[4907]: I0127 19:43:06.959266 4907 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 19:43:07 crc kubenswrapper[4907]: I0127 19:43:07.227930 4907 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cl48g_must-gather-s7n67_1ae9ac3e-3958-41d3-ab5f-1da8a8535f61/copy/0.log" Jan 27 19:43:07 crc kubenswrapper[4907]: I0127 19:43:07.228430 4907 scope.go:117] "RemoveContainer" containerID="fda1c70437b579b45625ba8bd319bb1f5ded3001420d1c91a8083242ad820aee" Jan 27 19:43:07 crc kubenswrapper[4907]: I0127 19:43:07.228471 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cl48g/must-gather-s7n67" Jan 27 19:43:07 crc kubenswrapper[4907]: I0127 19:43:07.264577 4907 scope.go:117] "RemoveContainer" containerID="7fa3c367ecb844ced4a80559c00b90adfb8a76e8f87e035467f2e575ec58c538" Jan 27 19:43:07 crc kubenswrapper[4907]: I0127 19:43:07.761012 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" path="/var/lib/kubelet/pods/1ae9ac3e-3958-41d3-ab5f-1da8a8535f61/volumes" Jan 27 19:43:15 crc kubenswrapper[4907]: I0127 19:43:15.757364 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" Jan 27 19:43:15 crc kubenswrapper[4907]: E0127 19:43:15.758670 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" 
podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:43:30 crc kubenswrapper[4907]: I0127 19:43:30.749169 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" Jan 27 19:43:30 crc kubenswrapper[4907]: E0127 19:43:30.751378 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:43:41 crc kubenswrapper[4907]: I0127 19:43:41.750001 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" Jan 27 19:43:41 crc kubenswrapper[4907]: E0127 19:43:41.750921 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:43:54 crc kubenswrapper[4907]: I0127 19:43:54.748978 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" Jan 27 19:43:54 crc kubenswrapper[4907]: E0127 19:43:54.750429 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:44:03 crc kubenswrapper[4907]: I0127 19:44:03.755862 4907 scope.go:117] "RemoveContainer" containerID="2cf8c64730830719caf33eccab19ea6a740f56201d48faedd2b0964a99b14a4a" Jan 27 19:44:03 crc kubenswrapper[4907]: I0127 19:44:03.789736 4907 scope.go:117] "RemoveContainer" containerID="0e0f8e9fc895cde38ae60b21c1a52e52e3f7fdf11973e60bf794410236541eb9" Jan 27 19:44:03 crc kubenswrapper[4907]: I0127 19:44:03.821218 4907 scope.go:117] "RemoveContainer" containerID="9e6cbd06744b574913b53f660db9189b287529c9f60c390d7656bf7d99231bfe" Jan 27 19:44:06 crc kubenswrapper[4907]: I0127 19:44:06.749697 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" Jan 27 19:44:06 crc kubenswrapper[4907]: E0127 19:44:06.751405 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:44:20 crc kubenswrapper[4907]: I0127 19:44:20.749197 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" Jan 27 19:44:20 crc kubenswrapper[4907]: E0127 19:44:20.750261 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 
27 19:44:32 crc kubenswrapper[4907]: I0127 19:44:32.749071 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" Jan 27 19:44:32 crc kubenswrapper[4907]: E0127 19:44:32.750510 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.561647 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hj4zw"] Jan 27 19:44:33 crc kubenswrapper[4907]: E0127 19:44:33.562362 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" containerName="extract-utilities" Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.562398 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" containerName="extract-utilities" Jan 27 19:44:33 crc kubenswrapper[4907]: E0127 19:44:33.562437 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" containerName="extract-content" Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.562450 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" containerName="extract-content" Jan 27 19:44:33 crc kubenswrapper[4907]: E0127 19:44:33.562480 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" containerName="copy" Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.562510 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" 
containerName="copy" Jan 27 19:44:33 crc kubenswrapper[4907]: E0127 19:44:33.562537 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" containerName="gather" Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.562548 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" containerName="gather" Jan 27 19:44:33 crc kubenswrapper[4907]: E0127 19:44:33.562597 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" containerName="registry-server" Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.562610 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" containerName="registry-server" Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.562998 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b177ad6-c99a-4f62-8fd6-c223bc910e39" containerName="registry-server" Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.563031 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" containerName="gather" Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.563106 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ae9ac3e-3958-41d3-ab5f-1da8a8535f61" containerName="copy" Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.566690 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hj4zw" Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.596304 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hj4zw"] Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.698856 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b700555d-4c61-4c37-9536-b4656d126ac4-utilities\") pod \"community-operators-hj4zw\" (UID: \"b700555d-4c61-4c37-9536-b4656d126ac4\") " pod="openshift-marketplace/community-operators-hj4zw" Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.699227 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b700555d-4c61-4c37-9536-b4656d126ac4-catalog-content\") pod \"community-operators-hj4zw\" (UID: \"b700555d-4c61-4c37-9536-b4656d126ac4\") " pod="openshift-marketplace/community-operators-hj4zw" Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.699269 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq4m4\" (UniqueName: \"kubernetes.io/projected/b700555d-4c61-4c37-9536-b4656d126ac4-kube-api-access-zq4m4\") pod \"community-operators-hj4zw\" (UID: \"b700555d-4c61-4c37-9536-b4656d126ac4\") " pod="openshift-marketplace/community-operators-hj4zw" Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.801197 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b700555d-4c61-4c37-9536-b4656d126ac4-utilities\") pod \"community-operators-hj4zw\" (UID: \"b700555d-4c61-4c37-9536-b4656d126ac4\") " pod="openshift-marketplace/community-operators-hj4zw" Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.801312 4907 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b700555d-4c61-4c37-9536-b4656d126ac4-catalog-content\") pod \"community-operators-hj4zw\" (UID: \"b700555d-4c61-4c37-9536-b4656d126ac4\") " pod="openshift-marketplace/community-operators-hj4zw" Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.801355 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq4m4\" (UniqueName: \"kubernetes.io/projected/b700555d-4c61-4c37-9536-b4656d126ac4-kube-api-access-zq4m4\") pod \"community-operators-hj4zw\" (UID: \"b700555d-4c61-4c37-9536-b4656d126ac4\") " pod="openshift-marketplace/community-operators-hj4zw" Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.801691 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b700555d-4c61-4c37-9536-b4656d126ac4-utilities\") pod \"community-operators-hj4zw\" (UID: \"b700555d-4c61-4c37-9536-b4656d126ac4\") " pod="openshift-marketplace/community-operators-hj4zw" Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.801787 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b700555d-4c61-4c37-9536-b4656d126ac4-catalog-content\") pod \"community-operators-hj4zw\" (UID: \"b700555d-4c61-4c37-9536-b4656d126ac4\") " pod="openshift-marketplace/community-operators-hj4zw" Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.822725 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq4m4\" (UniqueName: \"kubernetes.io/projected/b700555d-4c61-4c37-9536-b4656d126ac4-kube-api-access-zq4m4\") pod \"community-operators-hj4zw\" (UID: \"b700555d-4c61-4c37-9536-b4656d126ac4\") " pod="openshift-marketplace/community-operators-hj4zw" Jan 27 19:44:33 crc kubenswrapper[4907]: I0127 19:44:33.895264 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hj4zw" Jan 27 19:44:34 crc kubenswrapper[4907]: I0127 19:44:34.456613 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hj4zw"] Jan 27 19:44:35 crc kubenswrapper[4907]: I0127 19:44:35.314863 4907 generic.go:334] "Generic (PLEG): container finished" podID="b700555d-4c61-4c37-9536-b4656d126ac4" containerID="aa0972d1fee961820af9ec68cf0ff0330732cc70ca944e082e27b2c6aa99bd25" exitCode=0 Jan 27 19:44:35 crc kubenswrapper[4907]: I0127 19:44:35.314989 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hj4zw" event={"ID":"b700555d-4c61-4c37-9536-b4656d126ac4","Type":"ContainerDied","Data":"aa0972d1fee961820af9ec68cf0ff0330732cc70ca944e082e27b2c6aa99bd25"} Jan 27 19:44:35 crc kubenswrapper[4907]: I0127 19:44:35.315201 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hj4zw" event={"ID":"b700555d-4c61-4c37-9536-b4656d126ac4","Type":"ContainerStarted","Data":"23e76a11773b360e0fc49bf6a99054330ebd902d5638be83ff86a491578fd201"} Jan 27 19:44:35 crc kubenswrapper[4907]: I0127 19:44:35.316896 4907 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:44:41 crc kubenswrapper[4907]: I0127 19:44:41.382924 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hj4zw" event={"ID":"b700555d-4c61-4c37-9536-b4656d126ac4","Type":"ContainerStarted","Data":"e134d6755c86e52f6bec25cc82b163cc514bb35f0adcb3ce30bf391f09442494"} Jan 27 19:44:42 crc kubenswrapper[4907]: I0127 19:44:42.394499 4907 generic.go:334] "Generic (PLEG): container finished" podID="b700555d-4c61-4c37-9536-b4656d126ac4" containerID="e134d6755c86e52f6bec25cc82b163cc514bb35f0adcb3ce30bf391f09442494" exitCode=0 Jan 27 19:44:42 crc kubenswrapper[4907]: I0127 19:44:42.394590 4907 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-hj4zw" event={"ID":"b700555d-4c61-4c37-9536-b4656d126ac4","Type":"ContainerDied","Data":"e134d6755c86e52f6bec25cc82b163cc514bb35f0adcb3ce30bf391f09442494"} Jan 27 19:44:43 crc kubenswrapper[4907]: I0127 19:44:43.413533 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hj4zw" event={"ID":"b700555d-4c61-4c37-9536-b4656d126ac4","Type":"ContainerStarted","Data":"5c02b65095acabde9976dba4a61ead897b760db462b0ca410ddae2af84e50437"} Jan 27 19:44:43 crc kubenswrapper[4907]: I0127 19:44:43.443096 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hj4zw" podStartSLOduration=2.890748245 podStartE2EDuration="10.443077258s" podCreationTimestamp="2026-01-27 19:44:33 +0000 UTC" firstStartedPulling="2026-01-27 19:44:35.316625887 +0000 UTC m=+5930.445908499" lastFinishedPulling="2026-01-27 19:44:42.8689549 +0000 UTC m=+5937.998237512" observedRunningTime="2026-01-27 19:44:43.432292383 +0000 UTC m=+5938.561574995" watchObservedRunningTime="2026-01-27 19:44:43.443077258 +0000 UTC m=+5938.572359870" Jan 27 19:44:43 crc kubenswrapper[4907]: I0127 19:44:43.748823 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" Jan 27 19:44:43 crc kubenswrapper[4907]: E0127 19:44:43.749288 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:44:43 crc kubenswrapper[4907]: I0127 19:44:43.896217 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-hj4zw" Jan 27 19:44:43 crc kubenswrapper[4907]: I0127 19:44:43.896279 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hj4zw" Jan 27 19:44:44 crc kubenswrapper[4907]: I0127 19:44:44.957274 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hj4zw" podUID="b700555d-4c61-4c37-9536-b4656d126ac4" containerName="registry-server" probeResult="failure" output=< Jan 27 19:44:44 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s Jan 27 19:44:44 crc kubenswrapper[4907]: > Jan 27 19:44:53 crc kubenswrapper[4907]: I0127 19:44:53.950854 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hj4zw" Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.028811 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hj4zw" Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.130051 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hj4zw"] Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.227856 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dhv2c"] Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.228199 4907 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dhv2c" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="registry-server" containerID="cri-o://e575043c95bcc3816c2d34c76628c5d1837e3db19e0b22a6a4f4f7c688dfd5fc" gracePeriod=2 Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.578187 4907 generic.go:334] "Generic (PLEG): container finished" podID="bae6221e-526b-4cc4-9f9b-1079238c9100" 
containerID="e575043c95bcc3816c2d34c76628c5d1837e3db19e0b22a6a4f4f7c688dfd5fc" exitCode=0 Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.579751 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhv2c" event={"ID":"bae6221e-526b-4cc4-9f9b-1079238c9100","Type":"ContainerDied","Data":"e575043c95bcc3816c2d34c76628c5d1837e3db19e0b22a6a4f4f7c688dfd5fc"} Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.796698 4907 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhv2c" Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.879022 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42ntx\" (UniqueName: \"kubernetes.io/projected/bae6221e-526b-4cc4-9f9b-1079238c9100-kube-api-access-42ntx\") pod \"bae6221e-526b-4cc4-9f9b-1079238c9100\" (UID: \"bae6221e-526b-4cc4-9f9b-1079238c9100\") " Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.879188 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-utilities\") pod \"bae6221e-526b-4cc4-9f9b-1079238c9100\" (UID: \"bae6221e-526b-4cc4-9f9b-1079238c9100\") " Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.879327 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-catalog-content\") pod \"bae6221e-526b-4cc4-9f9b-1079238c9100\" (UID: \"bae6221e-526b-4cc4-9f9b-1079238c9100\") " Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.901748 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-utilities" (OuterVolumeSpecName: "utilities") pod "bae6221e-526b-4cc4-9f9b-1079238c9100" (UID: 
"bae6221e-526b-4cc4-9f9b-1079238c9100"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.902005 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bae6221e-526b-4cc4-9f9b-1079238c9100-kube-api-access-42ntx" (OuterVolumeSpecName: "kube-api-access-42ntx") pod "bae6221e-526b-4cc4-9f9b-1079238c9100" (UID: "bae6221e-526b-4cc4-9f9b-1079238c9100"). InnerVolumeSpecName "kube-api-access-42ntx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.961313 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bae6221e-526b-4cc4-9f9b-1079238c9100" (UID: "bae6221e-526b-4cc4-9f9b-1079238c9100"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.982686 4907 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.982998 4907 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bae6221e-526b-4cc4-9f9b-1079238c9100-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:44:54 crc kubenswrapper[4907]: I0127 19:44:54.983011 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42ntx\" (UniqueName: \"kubernetes.io/projected/bae6221e-526b-4cc4-9f9b-1079238c9100-kube-api-access-42ntx\") on node \"crc\" DevicePath \"\"" Jan 27 19:44:55 crc kubenswrapper[4907]: I0127 19:44:55.593471 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dhv2c" Jan 27 19:44:55 crc kubenswrapper[4907]: I0127 19:44:55.602275 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhv2c" event={"ID":"bae6221e-526b-4cc4-9f9b-1079238c9100","Type":"ContainerDied","Data":"9af132d6e463262eafbf32983cfb3b57f393e93d04c9a4adb9279236600176e5"} Jan 27 19:44:55 crc kubenswrapper[4907]: I0127 19:44:55.602332 4907 scope.go:117] "RemoveContainer" containerID="e575043c95bcc3816c2d34c76628c5d1837e3db19e0b22a6a4f4f7c688dfd5fc" Jan 27 19:44:55 crc kubenswrapper[4907]: I0127 19:44:55.632735 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dhv2c"] Jan 27 19:44:55 crc kubenswrapper[4907]: I0127 19:44:55.636690 4907 scope.go:117] "RemoveContainer" containerID="9767b9bd6335f81b22f4e7d1b7fb00bd57b538db401b0811063af3d06773de87" Jan 27 19:44:55 crc kubenswrapper[4907]: I0127 19:44:55.646903 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dhv2c"] Jan 27 19:44:55 crc kubenswrapper[4907]: I0127 19:44:55.681943 4907 scope.go:117] "RemoveContainer" containerID="74c450a7c4e4a16e788bf96635acd49f01f09f365c5a97fb77a1f1947ba88ae4" Jan 27 19:44:55 crc kubenswrapper[4907]: I0127 19:44:55.781342 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" path="/var/lib/kubelet/pods/bae6221e-526b-4cc4-9f9b-1079238c9100/volumes" Jan 27 19:44:56 crc kubenswrapper[4907]: I0127 19:44:56.748739 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" Jan 27 19:44:56 crc kubenswrapper[4907]: E0127 19:44:56.749619 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.202951 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4"] Jan 27 19:45:00 crc kubenswrapper[4907]: E0127 19:45:00.204158 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="registry-server" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.204173 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="registry-server" Jan 27 19:45:00 crc kubenswrapper[4907]: E0127 19:45:00.204204 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="extract-utilities" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.204212 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="extract-utilities" Jan 27 19:45:00 crc kubenswrapper[4907]: E0127 19:45:00.204228 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="extract-content" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.204237 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="extract-content" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.204618 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="bae6221e-526b-4cc4-9f9b-1079238c9100" containerName="registry-server" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.205651 4907 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.219668 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4"] Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.228248 4907 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.228842 4907 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.323196 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-config-volume\") pod \"collect-profiles-29492385-lxvh4\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.323365 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9cg9\" (UniqueName: \"kubernetes.io/projected/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-kube-api-access-h9cg9\") pod \"collect-profiles-29492385-lxvh4\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.323768 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-secret-volume\") pod \"collect-profiles-29492385-lxvh4\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.426618 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-config-volume\") pod \"collect-profiles-29492385-lxvh4\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.426741 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9cg9\" (UniqueName: \"kubernetes.io/projected/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-kube-api-access-h9cg9\") pod \"collect-profiles-29492385-lxvh4\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.426872 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-secret-volume\") pod \"collect-profiles-29492385-lxvh4\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.427815 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-config-volume\") pod \"collect-profiles-29492385-lxvh4\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.444421 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-secret-volume\") pod \"collect-profiles-29492385-lxvh4\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.453455 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9cg9\" (UniqueName: \"kubernetes.io/projected/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-kube-api-access-h9cg9\") pod \"collect-profiles-29492385-lxvh4\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:00 crc kubenswrapper[4907]: I0127 19:45:00.539746 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:01 crc kubenswrapper[4907]: I0127 19:45:01.025981 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4"] Jan 27 19:45:01 crc kubenswrapper[4907]: I0127 19:45:01.667905 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" event={"ID":"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0","Type":"ContainerStarted","Data":"ceb5fd244e2e4ed7566cf19be5269e42d507c5fa56a3527599d5ed553232d56a"} Jan 27 19:45:01 crc kubenswrapper[4907]: I0127 19:45:01.668310 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" event={"ID":"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0","Type":"ContainerStarted","Data":"45327a01c4ebde93b6e11703d694de6e4e93f8b2517b09d769c382c1c4893d06"} Jan 27 19:45:01 crc kubenswrapper[4907]: I0127 19:45:01.709027 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" 
podStartSLOduration=1.709004367 podStartE2EDuration="1.709004367s" podCreationTimestamp="2026-01-27 19:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:45:01.695550897 +0000 UTC m=+5956.824833519" watchObservedRunningTime="2026-01-27 19:45:01.709004367 +0000 UTC m=+5956.838286999" Jan 27 19:45:02 crc kubenswrapper[4907]: I0127 19:45:02.684929 4907 generic.go:334] "Generic (PLEG): container finished" podID="8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0" containerID="ceb5fd244e2e4ed7566cf19be5269e42d507c5fa56a3527599d5ed553232d56a" exitCode=0 Jan 27 19:45:02 crc kubenswrapper[4907]: I0127 19:45:02.685426 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" event={"ID":"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0","Type":"ContainerDied","Data":"ceb5fd244e2e4ed7566cf19be5269e42d507c5fa56a3527599d5ed553232d56a"} Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.117601 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.241670 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9cg9\" (UniqueName: \"kubernetes.io/projected/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-kube-api-access-h9cg9\") pod \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.241949 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-secret-volume\") pod \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.242038 4907 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-config-volume\") pod \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\" (UID: \"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0\") " Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.242679 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-config-volume" (OuterVolumeSpecName: "config-volume") pod "8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0" (UID: "8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.243028 4907 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.248999 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0" (UID: "8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.249027 4907 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-kube-api-access-h9cg9" (OuterVolumeSpecName: "kube-api-access-h9cg9") pod "8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0" (UID: "8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0"). InnerVolumeSpecName "kube-api-access-h9cg9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.345227 4907 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9cg9\" (UniqueName: \"kubernetes.io/projected/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-kube-api-access-h9cg9\") on node \"crc\" DevicePath \"\"" Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.345518 4907 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.721931 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" event={"ID":"8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0","Type":"ContainerDied","Data":"45327a01c4ebde93b6e11703d694de6e4e93f8b2517b09d769c382c1c4893d06"} Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.721969 4907 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45327a01c4ebde93b6e11703d694de6e4e93f8b2517b09d769c382c1c4893d06" Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.722015 4907 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-lxvh4" Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.773522 4907 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6"] Jan 27 19:45:04 crc kubenswrapper[4907]: I0127 19:45:04.785445 4907 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-9z9c6"] Jan 27 19:45:05 crc kubenswrapper[4907]: I0127 19:45:05.766628 4907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b8f8a81-de05-4458-b8bc-4031caa5a02c" path="/var/lib/kubelet/pods/2b8f8a81-de05-4458-b8bc-4031caa5a02c/volumes" Jan 27 19:45:10 crc kubenswrapper[4907]: I0127 19:45:10.748414 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" Jan 27 19:45:10 crc kubenswrapper[4907]: E0127 19:45:10.749533 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a" Jan 27 19:45:22 crc kubenswrapper[4907]: I0127 19:45:22.748326 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12" Jan 27 19:45:22 crc kubenswrapper[4907]: E0127 19:45:22.749452 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:45:35 crc kubenswrapper[4907]: I0127 19:45:35.749748 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:45:35 crc kubenswrapper[4907]: E0127 19:45:35.750805 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:45:50 crc kubenswrapper[4907]: I0127 19:45:50.748389 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:45:50 crc kubenswrapper[4907]: E0127 19:45:50.749348 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:46:03 crc kubenswrapper[4907]: I0127 19:46:03.995358 4907 scope.go:117] "RemoveContainer" containerID="730476889ff7c89dc83c11f4812e47c9e0e69e6dd2218580d51c13a19fd1dd08"
Jan 27 19:46:04 crc kubenswrapper[4907]: I0127 19:46:04.748102 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:46:04 crc kubenswrapper[4907]: E0127 19:46:04.748728 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.666087 4907 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p64zx"]
Jan 27 19:46:06 crc kubenswrapper[4907]: E0127 19:46:06.668046 4907 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0" containerName="collect-profiles"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.668174 4907 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0" containerName="collect-profiles"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.668631 4907 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d8ddc0f-70e8-4b43-8dbb-4c5fd7cedac0" containerName="collect-profiles"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.671513 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.689302 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p64zx"]
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.751723 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/375d7447-f391-4503-b44d-738db6a38564-catalog-content\") pod \"redhat-operators-p64zx\" (UID: \"375d7447-f391-4503-b44d-738db6a38564\") " pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.754920 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/375d7447-f391-4503-b44d-738db6a38564-utilities\") pod \"redhat-operators-p64zx\" (UID: \"375d7447-f391-4503-b44d-738db6a38564\") " pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.755324 4907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99fk5\" (UniqueName: \"kubernetes.io/projected/375d7447-f391-4503-b44d-738db6a38564-kube-api-access-99fk5\") pod \"redhat-operators-p64zx\" (UID: \"375d7447-f391-4503-b44d-738db6a38564\") " pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.858573 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/375d7447-f391-4503-b44d-738db6a38564-utilities\") pod \"redhat-operators-p64zx\" (UID: \"375d7447-f391-4503-b44d-738db6a38564\") " pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.858873 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99fk5\" (UniqueName: \"kubernetes.io/projected/375d7447-f391-4503-b44d-738db6a38564-kube-api-access-99fk5\") pod \"redhat-operators-p64zx\" (UID: \"375d7447-f391-4503-b44d-738db6a38564\") " pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.858974 4907 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/375d7447-f391-4503-b44d-738db6a38564-catalog-content\") pod \"redhat-operators-p64zx\" (UID: \"375d7447-f391-4503-b44d-738db6a38564\") " pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.859120 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/375d7447-f391-4503-b44d-738db6a38564-utilities\") pod \"redhat-operators-p64zx\" (UID: \"375d7447-f391-4503-b44d-738db6a38564\") " pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.859713 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/375d7447-f391-4503-b44d-738db6a38564-catalog-content\") pod \"redhat-operators-p64zx\" (UID: \"375d7447-f391-4503-b44d-738db6a38564\") " pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:06 crc kubenswrapper[4907]: I0127 19:46:06.888178 4907 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99fk5\" (UniqueName: \"kubernetes.io/projected/375d7447-f391-4503-b44d-738db6a38564-kube-api-access-99fk5\") pod \"redhat-operators-p64zx\" (UID: \"375d7447-f391-4503-b44d-738db6a38564\") " pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:07 crc kubenswrapper[4907]: I0127 19:46:07.011154 4907 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:07 crc kubenswrapper[4907]: I0127 19:46:07.539041 4907 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p64zx"]
Jan 27 19:46:08 crc kubenswrapper[4907]: I0127 19:46:08.493005 4907 generic.go:334] "Generic (PLEG): container finished" podID="375d7447-f391-4503-b44d-738db6a38564" containerID="3abb88824085ee899bccbe1ed3e38c0cc719189897fcb43aeba249ef706f0fc6" exitCode=0
Jan 27 19:46:08 crc kubenswrapper[4907]: I0127 19:46:08.493179 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p64zx" event={"ID":"375d7447-f391-4503-b44d-738db6a38564","Type":"ContainerDied","Data":"3abb88824085ee899bccbe1ed3e38c0cc719189897fcb43aeba249ef706f0fc6"}
Jan 27 19:46:08 crc kubenswrapper[4907]: I0127 19:46:08.493544 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p64zx" event={"ID":"375d7447-f391-4503-b44d-738db6a38564","Type":"ContainerStarted","Data":"16f74ed1ee4b667af4e334ece97cf14d7e101982b54a7105876827cd8c90beda"}
Jan 27 19:46:10 crc kubenswrapper[4907]: I0127 19:46:10.519752 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p64zx" event={"ID":"375d7447-f391-4503-b44d-738db6a38564","Type":"ContainerStarted","Data":"e1264200a8a870986601fa5b7e908030bd6cb3ffb0dc855267ed8149420b7bc7"}
Jan 27 19:46:17 crc kubenswrapper[4907]: I0127 19:46:17.603716 4907 generic.go:334] "Generic (PLEG): container finished" podID="375d7447-f391-4503-b44d-738db6a38564" containerID="e1264200a8a870986601fa5b7e908030bd6cb3ffb0dc855267ed8149420b7bc7" exitCode=0
Jan 27 19:46:17 crc kubenswrapper[4907]: I0127 19:46:17.603823 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p64zx" event={"ID":"375d7447-f391-4503-b44d-738db6a38564","Type":"ContainerDied","Data":"e1264200a8a870986601fa5b7e908030bd6cb3ffb0dc855267ed8149420b7bc7"}
Jan 27 19:46:18 crc kubenswrapper[4907]: I0127 19:46:18.615655 4907 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p64zx" event={"ID":"375d7447-f391-4503-b44d-738db6a38564","Type":"ContainerStarted","Data":"aee5b3283b745b829bc57d89b62142c2f34e63655ee6d96fcc73449a76e8c53d"}
Jan 27 19:46:18 crc kubenswrapper[4907]: I0127 19:46:18.639656 4907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p64zx" podStartSLOduration=2.881088512 podStartE2EDuration="12.639633473s" podCreationTimestamp="2026-01-27 19:46:06 +0000 UTC" firstStartedPulling="2026-01-27 19:46:08.496264465 +0000 UTC m=+6023.625547077" lastFinishedPulling="2026-01-27 19:46:18.254809436 +0000 UTC m=+6033.384092038" observedRunningTime="2026-01-27 19:46:18.634380275 +0000 UTC m=+6033.763662907" watchObservedRunningTime="2026-01-27 19:46:18.639633473 +0000 UTC m=+6033.768916085"
Jan 27 19:46:18 crc kubenswrapper[4907]: I0127 19:46:18.748703 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:46:18 crc kubenswrapper[4907]: E0127 19:46:18.749203 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:46:27 crc kubenswrapper[4907]: I0127 19:46:27.011889 4907 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:27 crc kubenswrapper[4907]: I0127 19:46:27.012527 4907 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p64zx"
Jan 27 19:46:28 crc kubenswrapper[4907]: I0127 19:46:28.069580 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p64zx" podUID="375d7447-f391-4503-b44d-738db6a38564" containerName="registry-server" probeResult="failure" output=<
Jan 27 19:46:28 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 19:46:28 crc kubenswrapper[4907]: >
Jan 27 19:46:33 crc kubenswrapper[4907]: I0127 19:46:33.748840 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:46:33 crc kubenswrapper[4907]: E0127 19:46:33.749514 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:46:38 crc kubenswrapper[4907]: I0127 19:46:38.067267 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p64zx" podUID="375d7447-f391-4503-b44d-738db6a38564" containerName="registry-server" probeResult="failure" output=<
Jan 27 19:46:38 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 19:46:38 crc kubenswrapper[4907]: >
Jan 27 19:46:44 crc kubenswrapper[4907]: I0127 19:46:44.748412 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:46:44 crc kubenswrapper[4907]: E0127 19:46:44.749868 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:46:48 crc kubenswrapper[4907]: I0127 19:46:48.092216 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p64zx" podUID="375d7447-f391-4503-b44d-738db6a38564" containerName="registry-server" probeResult="failure" output=<
Jan 27 19:46:48 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 19:46:48 crc kubenswrapper[4907]: >
Jan 27 19:46:55 crc kubenswrapper[4907]: I0127 19:46:55.789682 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:46:55 crc kubenswrapper[4907]: E0127 19:46:55.790995 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:46:58 crc kubenswrapper[4907]: I0127 19:46:58.067403 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p64zx" podUID="375d7447-f391-4503-b44d-738db6a38564" containerName="registry-server" probeResult="failure" output=<
Jan 27 19:46:58 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 19:46:58 crc kubenswrapper[4907]: >
Jan 27 19:47:07 crc kubenswrapper[4907]: I0127 19:47:07.749177 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:47:07 crc kubenswrapper[4907]: E0127 19:47:07.750160 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"
Jan 27 19:47:08 crc kubenswrapper[4907]: I0127 19:47:08.071118 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p64zx" podUID="375d7447-f391-4503-b44d-738db6a38564" containerName="registry-server" probeResult="failure" output=<
Jan 27 19:47:08 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 19:47:08 crc kubenswrapper[4907]: >
Jan 27 19:47:18 crc kubenswrapper[4907]: I0127 19:47:18.085106 4907 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p64zx" podUID="375d7447-f391-4503-b44d-738db6a38564" containerName="registry-server" probeResult="failure" output=<
Jan 27 19:47:18 crc kubenswrapper[4907]: timeout: failed to connect service ":50051" within 1s
Jan 27 19:47:18 crc kubenswrapper[4907]: >
Jan 27 19:47:21 crc kubenswrapper[4907]: I0127 19:47:21.748419 4907 scope.go:117] "RemoveContainer" containerID="f5b44ba932606ad26780ac4b599832b1d16676d016109d1954bf05c995d9ea12"
Jan 27 19:47:21 crc kubenswrapper[4907]: E0127 19:47:21.749604 4907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wgvjh_openshift-machine-config-operator(437f8dd5-d37d-4b51-a08f-8c68b3bc038a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wgvjh" podUID="437f8dd5-d37d-4b51-a08f-8c68b3bc038a"